From wkt at tuhs.org Wed Jul 4 13:09:56 2018 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 04 Jul 2018 13:09:56 +1000 Subject: [COFF] New List is Open Message-ID: Hi all. The new COFF mailing list is open & has about twenty subscribers. Feel free to use it! Cheers, Warren -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wkt at tuhs.org Thu Jul 5 15:56:50 2018 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 5 Jul 2018 15:56:50 +1000 Subject: [COFF] Other OSes? Message-ID: <20180705055650.GA2170@minnie.tuhs.org> OK, I guess I'll be the one to start things going on the COFF list. What other features, ideas etc. were available in other operating systems which Unix should have picked up but didn't? [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ] Cheers, Warren From spedraja at gmail.com Thu Jul 5 16:29:29 2018 From: spedraja at gmail.com (SPC) Date: Thu, 5 Jul 2018 08:29:29 +0200 Subject: [COFF] Other OSes? In-Reply-To: <20180705055650.GA2170@minnie.tuhs.org> References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: 2018-07-05 7:56 GMT+02:00 Warren Toomey : > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? > > I've thought about this many times in the past and I've always run into the same problem: the frame of reference. My choice has always been to use the timeline as the first variable; and second, the characteristics, power and size of the hardware on which UNIX would run. But I think that a third variable is needed: other hardware systems running other operating systems. And as a final step, a detailed list of the aspects and features of these OSes and UNIX, selecting the features common to all... or the uncommon ones. Cordiales saludos / Kind Regards. 
Gracias | Regards - Saludos | Greetings | Freundliche Grüße | Salutations -- *Sergio Pedraja* ----- Don't believe everything you see, nor believe that you are seeing everything ----- "The state of a Backup is unknown until you try to restore it" (- nixCraft) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bakul at bitblocks.com Thu Jul 5 16:40:36 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Wed, 4 Jul 2018 23:40:36 -0700 Subject: [COFF] Other OSes? In-Reply-To: <20180705055650.GA2170@minnie.tuhs.org> References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: On Jul 4, 2018, at 10:56 PM, Warren Toomey wrote: > > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ] - Capabilities (a number of OSes implemented them -- See Hank Levy's book: https://homes.cs.washington.edu/~levy/capabook/ - Namespaces (plan9) From clemc at ccc.com Fri Jul 6 01:23:04 2018 From: clemc at ccc.com (Clem Cole) Date: Thu, 5 Jul 2018 11:23:04 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah wrote: On Jul 4, 2018, at 10:56 PM, Warren Toomey wrote: > > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! 
] > > - Capabilities (a number of OSes implemented them -- See Hank Levy's book: > https://homes.cs.washington.edu/~levy/capabook/ > - Namespaces (plan9) > > +1 for both > > > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Fri Jul 6 06:49:58 2018 From: scj at yaccman.com (Steve Johnson) Date: Thu, 05 Jul 2018 13:49:58 -0700 Subject: [COFF] Other OSes? In-Reply-To: Message-ID: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> That's an interesting topic, but it also gets my mind thinking about UNIX features that were wonderful but didn't evolve as computers did. My two examples of this are editor scripts and shell scripts. In the day, I would write at least one shell script and several editor scripts a day. Most of them were 2-4 lines long and used once. But they allowed operations to be done on multiple files quite quickly and safely. With the advent of glass teletypes, shell scripts simply evaporated -- there was no equivalent. (yes, there were programs like sed, but it wasn't the same...). Changing, e.g., a function name in 10 files got a lot more tedious. With the advent of drag and drop and visual interfaces, shell scripts evaporated as well. Once again, doing something on 10 files got harder than before. I still use a lot of shell scripts, but mostly don't write them from scratch any more. What abstraction mechanisms might we add back to Unix to fill these gaps? Steve ----- Original Message ----- From: "Clem Cole" To: "Bakul Shah" Cc: Sent: Thu, 5 Jul 2018 11:23:04 -0400 Subject: Re: [COFF] Other OSes? On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah wrote: On Jul 4, 2018, at 10:56 PM, Warren Toomey wrote: > > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. 
were available in other operating > systems which Unix should have picked up but didn't? > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ] - Capabilities (a number of OSes implemented them -- See Hank Levy's book:   https://homes.cs.washington.edu/~levy/capabook/ [3] - Namespaces (plan9) ​+1 for both​ _______________________________________________ COFF mailing list COFF at minnie.tuhs.org [4] https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff [5] ᐧ Links: ------ [1] mailto:bakul at bitblocks.com [2] mailto:wkt at tuhs.org [3] https://homes.cs.washington.edu/~levy/capabook/ [4] mailto:COFF at minnie.tuhs.org [5] https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at kdbarto.org Fri Jul 6 07:25:38 2018 From: david at kdbarto.org (David) Date: Thu, 5 Jul 2018 14:25:38 -0700 Subject: [COFF] Other OSes? In-Reply-To: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> Message-ID: I find that anything I have to do twice winds up in a shell script, on the assumption that I’ll do it again. So I’ve written a lot of scripts lately, mostly to automate some tools to run our software validation system. These aren’t long (10-15 lines) and I freely give them to others knowing that they will either ignore it (mostly) or use it as is without ever looking at the internals. I think that editor scripts have died out. ed is almost designed for scripting, and with vi, emacs, and other GUI based editors the concept of scripting the editor has died away. Sad, I do remember writing some really hairy ed scripts. A quick read of Hank Levy’s book (thanks for the link) brings back (somewhat bad) memories of working on the iAPX 432. I found the chip interesting in concept, and poorly suited for what the company wanted done. Trying to fold the software to fit the hardware didn’t work out. 
The project was abandoned. David > On Jul 5, 2018, at 1:49 PM, Steve Johnson wrote: > > That's an interesting topic, but it also gets my mind thinking about UNIX features that were wonderful but didn't evolve as computers did. > > My two examples of this are editor scripts and shell scripts. In the day, I would write at least one shell script and several editor scripts a day. Most of them were 2-4 lines long and used once. But they allowed operations to be done on multiple files quite quickly and safely. > > With the advent of glass teletypes, shell scripts simply evaporated -- there was no equivalent. (yes, there were programs like sed, but it wasn't the same...). Changing, e.g., a function name in 10 files got a lot more tedious. > > With the advent of drag and drop and visual interfaces, shell scripts evaporated as well. Once again, doing something on 10 files got harder than before. I still use a lot of shell scripts, but mostly don't write them from scratch any more. > > What abstraction mechanisms might we add back to Unix to fill these gaps? > > Steve > > > > ----- Original Message ----- > From: "Clem Cole" > To:"Bakul Shah" > Cc: > Sent:Thu, 5 Jul 2018 11:23:04 -0400 > Subject:Re: [COFF] Other OSes? > > > > > On Thu, Jul 5, 2018 at 2:40 AM, Bakul Shah > wrote: > On Jul 4, 2018, at 10:56 PM, Warren Toomey > wrote: > > > > OK, I guess I'll be the one to start things going on the COFF list. > > > > What other features, ideas etc. were available in other operating > > systems which Unix should have picked up but didn't? > > > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! 
] > > - Capabilities (a number of OSes implemented them -- See Hank Levy's book: > https://homes.cs.washington.edu/~levy/capabook/ > - Namespaces (plan9) > > +1 for both > > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff > > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralph at inputplus.co.uk Fri Jul 6 08:38:41 2018 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Thu, 05 Jul 2018 23:38:41 +0100 Subject: [COFF] Other OSes? In-Reply-To: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> Message-ID: <20180705223841.959CC21EC8@orac.inputplus.co.uk> Hi Steve, > With the advent of glass teletypes, [editor] scripts simply evaporated > -- there was no equivalent. (yes, there were programs like sed, but > it wasn't the same...). Changing, e.g., a function name in 10 files > got a lot more tedious. I still write the odd editor script. I interact with ed every day; it's handy when the information for the edit is on the TTY from previous commands and you just want to get in, edit, and w, q. A shell script and ed script I use a lot is ~/bin/rcsanno, a `blame' for RCS files that I have dotted about. RCS because SCCS wasn't available back before CSSC came along. It runs through each revision on the path I'm interested in, e.g. 1.1 to the latest 1.42. It starts with 1.1, but with `1.1:' prepended to each line. For 1.2 onwards it does an rcsdiff(1) with `-e': produce an ed script. This is piped into a brief bit of awk that munges the ed script to prepend the revision to any added or changed line, and then it's fed into ed. Thus the `1.2' that's created has all lines match /^1\.[12]:/. 
And so on up to 1.42. It's not the quickest way compared to interpreting the RCS `,v' file, but it required no insight into that file format and is `good enough'. I suspect a more interesting question is what did Unix adopt from other OSes that it would have been better without! :-) -- Cheers, Ralph. https://plus.google.com/+RalphCorderoy From ewayte at gmail.com Fri Jul 6 08:51:40 2018 From: ewayte at gmail.com (Eric Wayte) Date: Thu, 5 Jul 2018 18:51:40 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: That book brought back memories - I wrote a paper for my graduate computer architecture class on the Intel 432 around 1988. On Thu, Jul 5, 2018 at 2:46 AM Bakul Shah wrote: > On Jul 4, 2018, at 10:56 PM, Warren Toomey wrote: > > > > OK, I guess I'll be the one to start things going on the COFF list. > > > > What other features, ideas etc. were available in other operating > > systems which Unix should have picked up but didn't? > > > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ] > > - Capabilities (a number of OSes implemented them -- See Hank Levy's book: > https://homes.cs.washington.edu/~levy/capabook/ > - Namespaces (plan9) > > > > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff > -- Eric Wayte -------------- next part -------------- An HTML attachment was scrubbed... URL: From bakul at bitblocks.com Fri Jul 6 09:11:57 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Thu, 05 Jul 2018 16:11:57 -0700 Subject: [COFF] Other OSes? In-Reply-To: Your message of "Thu, 05 Jul 2018 13:49:58 -0700." 
<82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> Message-ID: <20180705231205.14944156E400@mail.bitblocks.com> On Thu, 05 Jul 2018 13:49:58 -0700 "Steve Johnson" wrote: > That's an interesting topic, but it also gets my mind thinking about UNIX > features that were wonderful but didn't evolve as computers did. > > My two examples of this are editor scripts and shell scripts. In the day, I > would write at least one shell script and several editor scripts a day. Most > of them were 2-4 lines long and used once. But they allowed operations to be > done on multiple files quite quickly and safely. > > With the advent of glass teletypes, shell scripts simply evaporated -- there > was no equivalent. (yes, there were programs like sed, but it wasn't the > same...). Changing, e.g., a function name in 10 files got a lot more tedious. > > With the advent of drag and drop and visual interfaces, shell scripts > evaporated as well. Once again, doing something on 10 files got harder than > before. I still use a lot of shell scripts, but mostly don't write them from > scratch any more. With specialized apps there is less need for the kind of things we used to do. While some of us want Lego Technic, most people simply want preconstructed toys to play with. > What abstraction mechanisms might we add back to Unix to fill these gaps? One way to interpret your question is: can we build *composable* GUI elements? In specialized applications there are GUI-based systems that work, e.g. schematic capture, layout, control system design, sound, etc. (Though for circuit design HDL is probably far more popular now. Schematic entry is only for board level design.) Even if you designed GUI elements for filters etc. where you can drag and drop files on an input port and attach output to some other input port, the challenge is in how to make it easy to use. Normally this would be unwieldy to use. 
Unless you used virtual "NFC" -- where bringing two processing blocks near each other automatically connected their input/output ports. Or you can open a "processing" window where you can type in your script but it is not run until its output is connected somewhere. Or a "selection" window where copies of files/dirs can be dragged and dropped. This may actually work well for distributed systems, which are basically dataflow machines. [Note that control systems are essentially dataflow systems.] This would actually complement something I have been thinking of for constructing/managing distributed systems. Note that there are languages like Scratch & Blockly for kids' programming but to my knowledge nothing at the macro level, where the building blocks are files/dirs/machines, TCP connections and programs. From lm at mcvoy.com Fri Jul 6 10:06:59 2018 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 5 Jul 2018 17:06:59 -0700 Subject: [COFF] Other OSes? In-Reply-To: <20180705231205.14944156E400@mail.bitblocks.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180705231205.14944156E400@mail.bitblocks.com> Message-ID: <20180706000659.GD18361@mcvoy.com> On Thu, Jul 05, 2018 at 04:11:57PM -0700, Bakul Shah wrote: > On Thu, 05 Jul 2018 13:49:58 -0700 "Steve Johnson" wrote: > > That's an interesting topic, but it also gets my mind thinking about UNIX > > features that were wonderful but didn't evolve as computers did. > > > > My two examples of this are editor scripts and shell scripts. In the day, I > > would write at least one shell script and several editor scripts a day. Most > > of them were 2-4 lines long and used once. But they allowed operations to be > > done on multiple files quite quickly and safely. > > > > With the advent of glass teletypes, shell scripts simply evaporated -- there > > was no equivalent. (yes, there were programs like sed, but it wasn't the > > same...). Changing, e.g., a function name in 10 files got a lot more tedious. 
> > > > With the advent of drag and drop and visual interfaces, shell scripts > > evaporated as well. Once again, doing something on 10 files got harder than > > before. I still use a lot of shell scripts, but mostly don't write them from > > scratch any more. > > With specialized apps there is less need for the kind of > things we used to do. While some of us want lego technic, > most people simply want preconstructed toys to play with. Years and years ago, decades ago, I worked on a time series picker that had a pretty cool interface. Yeah, it was a GUI tool with all the menus, etc, but it also had a console prompt because all the menus had keyboard shortcuts. What was neat about it was that as you pulled down menus and did stuff, which was a process where you'd go through several things to get what you want, the console would fill in with the shortcuts. So if you hadn't used it for a while, using it basically taught you the shortcuts. It was pretty slick, I wish all guis worked like that. From crossd at gmail.com Fri Jul 6 10:55:38 2018 From: crossd at gmail.com (Dan Cross) Date: Thu, 5 Jul 2018 20:55:38 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180705055650.GA2170@minnie.tuhs.org> References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: On Thu, Jul 5, 2018 at 1:56 AM Warren Toomey wrote: > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for long! ] > Ooo, neato. I think about this a lot, because my day job is writing OS kernels. I think it's kind of paradoxical, but Unix went off the rails in not embracing the filesystem enough, and simultaneously embracing it too much. 
Separate namespaces for things like sockets add unnecessary complexity since you need separate system calls to create these objects because you can't do it through the filesystem. "Everything is a file" ended up being too weak for a lot of interesting applications. Plan 9 fixed that by saying instead, "everything is a filesystem and we have per-process (group) composable namespaces." So now your window system is inherently network-capable and recursive with no special effort. That was great, but it turns out that the simplistic nature of Unix (and plan9) files doesn't fit some applications particularly well: doing scatter/gather is sort of a pain on plan9; fixed on Unix, but with an ugly interface. Modeling objects that are block oriented or transactional on Unix is still something of a hack, and similarly on plan9. Important classes of applications, like relational databases, don't map particularly well to the filesystem (yes, yes, RDBMS servers usually store their data in files, but they end up imposing a substantial layer on top of the filesystem for transactions, indices, etc. In the end, they often end up treating large files like block IO devices, or going straight to the storage layer and working against a raw device). Further, text as a "universal" medium means that every processing stage in a pipeline has to do substantial parsing and validation of input, and semantic information passed between stages is done using an external representation as a side-channel. Of course, often we don't bother with this because we "know" what the data is and we don't need to (I submit that most pipelines are one-offs) and where pipelines are used more formally in large programs, often the data passed is just binary. I had a talk with some plan9 folks (BTL alumni) about this about a year ago in the context of a project at work. 
I wanted to know why we didn't architect systems like Unix/plan9 anymore: nowadays, everything's hidden behind captive (web) interfaces that aren't particularly general, let alone composable in ad hoc ways. Data stores are highly structured and totally inaccessible. Why? It seemed like a step backwards. The explanation I got was that Unix and plan9 were designed to solve particular problems during specific times; those problems having been solved at those times, things changed and the same problems aren't as important anymore: ergo, the same style of solution wouldn't be appropriate in this brave new world. I'm sure we all have our examples of how file-based problems *are* important, but the rest of the world has moved on, it seems and let's face it: if you're on this list, you're likely rather the exception than the rule. Incidentally, I think one might be able to base a capability system on plan9's namespaces, but it seems like everyone has a different definition of what a capability is. For example, do I have a capability granting me the right to open TCP connections, potentially to anywhere, or do I have a capability that allows me to open a TCP connection to a specific host/port pair? More than one connection? How about a subset of ports or a range or set of IP addresses? It's unclear to me how that's *actually* modeled and it seems to me that you need to interpose some sort of local policy into the process somehow, but that that policy exists outside of the capabilities framework itself. A few more specific things I think would be cool to see in a beyond-Unix OS: 1. Multics-style multi-level security within a process. Systems like CHERI are headed in that direction and Dune and gVisor give many of the benefits, but I've wondered if one could leverage hardware-assisted nested virtualization to get something analogous to Multics-style rings. I imagine it would be slow.... 2. 
Is mmap() *really* the best we can do for mapping arbitrary resources into an address space? 3. A more generalized message passing system would be cool. Something where you could send a message with a payload somewhere in a synchronous way would be nice (perhaps analogous to channels). VMS-style mailboxes would have been neat. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tytso at mit.edu Fri Jul 6 10:52:46 2018 From: tytso at mit.edu (Theodore Y. Ts'o) Date: Thu, 5 Jul 2018 20:52:46 -0400 Subject: [COFF] Other OSes? In-Reply-To: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> Message-ID: <20180706005246.GA28138@thunk.org> On Thu, Jul 05, 2018 at 01:49:58PM -0700, Steve Johnson wrote: > My two examples of this are editor scripts and shell scripts. In > the day, I would write at least one shell script and several editor > scripts a day. Most of them were 2-4 lines long and used once. But > they allowed operations to be done on multiple files quite quickly and > safely. Before making such generalizations, I think it's wise to state more clearly what you have in mind when you say "shell script". I use shell scripts all the time, but they tend to be for simple things that have control statements, and thus are not 2-4 lines long, and they aren't use-just-once sort of things. For example, my "do-suspend" script:

#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    exec sudo /usr/local/bin/do-suspend
fi
/usr/bin/xflock4
sleep 0.5
echo deep > /sys/power/mem_sleep
exec systemctl suspend

I use this many times a day, and shell script was the right choice, as it was simple to write, simple to debug, and it would have been more effort to implement it in almost any other language. (Note that I needed to use /bin/bash instead of a POSIX /bin/sh because I needed access to the effective uid via $EUID.) 
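For comparison, a strictly POSIX-sh version of the same root check is possible by asking id(1) for the effective uid instead of using bash's $EUID. This is only a sketch (it echoes what it would do rather than exec'ing sudo, and the do-suspend path is just the one quoted above):

```shell
#!/bin/sh
# POSIX-sh sketch of the same check: `id -u` prints the effective uid,
# so no bashism ($EUID) is required.  Echoes instead of re-exec'ing
# under sudo so the sketch is safe to run anywhere.
if [ "$(id -u)" -ne 0 ]; then
    echo "would exec: sudo /usr/local/bin/do-suspend"
else
    echo "already root"
fi
```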
Your definition of "shell scripts" and "editor scripts" that have disappeared seems to be one-off things. For me, those have been replaced by command-line history and command-line editing in the shell, and as far as one-off editor scripts, I use emacs's keyboard macro facility, which is faster (for me) to set up than editor scripts. > With the advent of glass teletypes, shell scripts simply evaporated -- > there was no equivalent. (yes, there were programs like sed, but it > wasn't the same...). Changing, e.g., a function name in 10 files > got a lot more tedious. So here's the thing, with emacs's keyboard macros, it's actually not tedious at all. That's why I use them instead of editor scripts! Granted, somewhere starting around 10 files I'll probably end up breaking out some emacs-lisp for further automation, but for a small number of files, using file-name completion and then using a handful of control characters per file to kick off the keyboard macros a few hundred times (C-u C-u C-u C-u C-u C-x C-e) ends up being faster to type than consing up a one-off shell or editor script. (Because, basically, an emacs keyboard macro really *is* an editor script!) > What abstraction mechanisms might we add back to Unix to fill these > gaps? What do you mean by "Unix"? Does using /bin/bash for my shell scripts count as Unix? Or only if I restrict myself to a strict Unix V9 or POSIX subset? Does using emacs count as "Unix"? What about perl? Or Python? Personally I consider all of this "Unix", or at least "Linux", and they are additional tools which, if learned well, are far more powerful than the traditional Unix tools. And the nice thing about Linux largely dominating the "Unix-like" world is for the most part, I don't have to worry about backwards compatibility issues, at least for my personal usage. For scripts that I expect other people to use, I still have the reflex of doing things as portably as possible. 
So for example, consider if you will, mk_cmds and its helper scripts, which are part of a port of Multics's SubSystem facility that originated with Kerberos V5, and is now also part of the ext2/3/4 userspace utilities: https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/mk_cmds.sh.in https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/ct_c.sed https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/lib/ss/ct_c.awk My implementation of mk_cmds was designed for ultra-portability (as in, it would behave identically on Apple's A/UX, IBM's AIX, Solaris (with either a SunSoft or GNU toolchain), OSF/1, Linux, etc.). To do this, the above scripts use a strict subset of POSIX-defined syntax and behaviours. And it's a great use of shell, sed, and awk scripts, precisely because they were available everywhere. The original version of mk_cmds was implemented in C, Yacc, and Lex, and the problem was that (a) this got complicated when cross compiling, where the Host != Target architecture, and (b) believe it or not, Yacc and Lex are not all that standardized across different Unix/Linux systems. (Between AT&T's Yacc, Berkeley Yacc, and GNU Bison, portability was a massive and painful headache.) So I solved both problems by taking what previously had been done in C, Yacc, and Lex, and using shell, sed, and awk instead. :-) - Ted From grog at lemis.com Fri Jul 6 14:04:09 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Fri, 6 Jul 2018 14:04:09 +1000 Subject: [COFF] Other OSes? In-Reply-To: <20180705055650.GA2170@minnie.tuhs.org> References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: <20180706040409.GC11366@eureka.lemis.com> On Thursday, 5 July 2018 at 15:56:50 +1000, Warren Toomey wrote: > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? 
I came to Unix from Tandem's Guardian OS, which was clearly based on some exposure to Unix, though in many cases it was much more primitive (no hierarchical file system, for example). But it did have one area where I found it significantly superior: interprocess communication. The approach is considerably different from Unix. The system is a loosely coupled multiprocessor system which communicates by message passing, and everything, including file operations, goes via the message system. That makes IPC really the basis of system functionality, and at a user process level messages from other processes are read from a special file ($RECEIVE, fd 0 when it's open). It's really difficult to compare with Unix. I've tried several times over the years, and I still haven't come to any conclusion. One problem is that Tandem's userland tools (shell and friends) are far inferior to Unix's. And when we tried to shoehorn Unix onto Guardian, we ran into all sorts of conceptual issues. I wrote an overview of Guardian that was published as chapter 8 of Spinellis and Gousios "Beautiful Architecture" (O'Reilly, 2009). If you don't have access, my final draft is at http://www.lemis.com/grog/Books/t16arch.pdf. Comments welcome. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From bakul at bitblocks.com Fri Jul 6 15:42:55 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Thu, 05 Jul 2018 22:42:55 -0700 Subject: [COFF] Other OSes? In-Reply-To: Your message of "Thu, 05 Jul 2018 20:55:38 -0400." 
References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: <20180706054302.72718156E400@mail.bitblocks.com> On Thu, 05 Jul 2018 20:55:38 -0400 Dan Cross wrote: > A few more specific things I think would be cool to see in a beyond-Unix OS: > > 1. Multics-style multi-level security within a process. Systems like CHERI > are headed in that direction and Dune and gVisor give many of the benefits, > but I've wondered if one could leverage hardware-assisted nested > virtualization to get something analogous to Multics-style rings. I imagine > it would be slow.... In traditional machines a protection domain is tightly coupled with a virtual address space. Code in one address space can not touch anything in another address space (unless the same VM object is mapped in both). Except for shared memory mapping, any other communication must be mediated by a kernel. [x86 has call gates but they are not used much if at all] In the Mill arch. a protection domain is decoupled from virtual address space. That is, code in one domain can not directly touch anything in another domain but can call functions in another domain, provided it has the right sort of access rights. Memory can be mapped into multiple domains so once mapped, access becomes cheap. This also means everything can be in the same virtual address space. In traditional systems there is a mode switch when a process makes a supervisor call but this is dangerous (access to everything in kernel mode so people want nested domains). In Mill a thread can traverse through multiple protection domains -- sort of like in the Alpha Real Time Kernel where a thread can traverse through a number of nodes[1] -- and each node in effect is its own protection domain. This means instead of a syscall you can make a shared library call directly to a service running in another domain and what this function can access from your domain is very tightly constrained. The need for a privileged kernel completely disappears! 
Mill ideas are very much worth exploring. It will be possible to build highly secure systems with it -- if it ever gets sufficiently funded and built! IMHO layers of mapping as with virtualization/containerization are not really needed for better security or isolation.

> 2. Is mmap() *really* the best we can do for mapping arbitrary resources
> into an address space?

I think this is fine. Even mmapping remote objects should work!

> 3. A more generalized message passing system would be cool. Something where
> you could send a message with a payload somewhere in a synchronous way
> would be nice (perhaps analogous to channels). VMS-style mailboxes would
> have been neat.

Erlang and Carl Hewitt's Actor model have this.

[1] http://tierra.aslab.upm.es/~sanz/cursos/DRTS/AlphaRtDistributedKernel.pdf

From ralph at inputplus.co.uk Fri Jul 6 15:59:30 2018
From: ralph at inputplus.co.uk (Ralph Corderoy)
Date: Fri, 06 Jul 2018 06:59:30 +0100
Subject: [COFF] Other OSes?
In-Reply-To: <20180706005246.GA28138@thunk.org>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180706005246.GA28138@thunk.org>
Message-ID: <20180706055930.B20471F94A@orac.inputplus.co.uk>

Hi Ted,

> if [[ $EUID -ne 0 ]]; then ...
> (Note that I needed to use /bin/bash instead of a POSIX /bin/sh
> because I needed access to the effective uid via $EUID.)

id(1) `-u' option?

    $ ls -l id
    -rwsr-xr-x 1 31415 31415 38920 Jul 6 06:51 id
    $ ./id
    uid=1000(ralph) gid=1000(ralph) euid=31415 groups=1000(ralph),10(wheel),14(uucp),100(users)
    $ ./id -u
    31415
    $ ./id -ru
    1000
    $

--
Cheers, Ralph. https://plus.google.com/+RalphCorderoy

From gtaylor at tnetconsulting.net Sat Jul 7 01:38:12 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 6 Jul 2018 09:38:12 -0600
Subject: [COFF] Other OSes?
In-Reply-To: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com>
Message-ID: <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net>

On 07/05/2018 02:49 PM, Steve Johnson wrote:
> My two examples of this are editor scripts and shell scripts.

Will you please elaborate on what you mean by "editor scripts"? That's a term that I'm not familiar with. I feel like you're talking about automating an editor in one way or another. Redirecting standard input into ed (ex, etc.) comes to mind. As does vi(m)'s ex command mode. Regular expressions, macros, and the like start waving their arms like an anxious student in the back of the class. There's also Vim's Vimscript language, which is black magic to me.

> In the day, I would write at least one shell script and several editor
> scripts a day. Most of them were 2-4 lines long and used once. But they
> allowed operations to be done on multiple files quite quickly and safely.

I too have always written lots of shell scripts. Granted, most of the one-off shell scripts are long / nested command lines and not actually script files. At least not until I need to do something for the 2nd or 3rd time. I use the following single-line script daily if not hourly.

/usr/bin/xsel -ob | /bin/sed 's/^\s*>\{1,\}\s\{1,\}//;s/^\s*//' | /usr/bin/fmt | /bin/sed ':a;N;$!ba;s/\n/ \n/g;s/ \n \n/\n\n/g;s/\n \n/\n\n/g;s/ $//' | /usr/bin/xsel -ib

It is derived from a very close variant that works without the leading /^>+\s/

> With the advent of glass teletypes, shell scripts simply evaporated --
> there was no equivalent.  (yes, there were programs like sed, but it
> wasn't the same...).  Changing, e.g., a function name in 10 files got a
> lot more tedious.

I don't understand that at all. How did glass ttys (screens) change what people do / did on unix? Granted, there is more output history with a print terminal.
There are times that I'd like more of that, particularly when my terminal's history buffer isn't deep enough for one reason or another. (I really should raise that higher than 10k lines.) What could / would you do at a shell prompt pre-glass-TTYs that you can't do the same now with glass-TTYs? I must need more caffeine as I'm not understanding the difference.

> With the advent of drag and drop and visual interfaces, shell scripts
> evaporated as well.

Why do you say that? If anything, GUIs caused me to use more shell scripts. Rather, things that I used to do as a quick one-off are now saved in a shell script that I'll call with the path to the file(s) that I want to act on. On Windows I'd write batch files that worked by accepting file(s) being dragged and dropped onto them. That way I could select files, drag them to a shortcut, and have the script run on the files that I had visually selected. (I've not felt the need to do similar in unix.)

> Once again, doing something on 10 files got harder than before.

Why did it get harder? Could you no longer still use the older shell based method? Or are you saying that there was no good GUI counterpart for what you used to do in shell?

> I still use a lot of shell scripts, but mostly don't write them from
> scratch any more.

I too use a lot of shell scripts. Many of them evolve from ad hoc command lines that have grown more complex or have been needed for the 3rd time.

> What abstraction mechanisms might we add back to Unix to fill these gaps?

I don't know what I'd add back as I feel like they are still there. I also think that the CLI is EXTREMELY dynamic and EXTREMELY flexible. I have zero idea how to provide a point-and-click GUI interface that is as dynamic or flexible.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL:

From gtaylor at tnetconsulting.net Sat Jul 7 01:42:11 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 6 Jul 2018 09:42:11 -0600
Subject: [COFF] Other OSes?
In-Reply-To:
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com>
Message-ID: <25501ef6-2f28-426d-6f63-f171c8f61db1@spamtrap.tnetconsulting.net>

On 07/05/2018 03:25 PM, David wrote:
> ed is almost designed for scripting, and with vi, emacs, and other GUI
> based editors the concept of scripting the editor has died away.

I don't think I would have concluded that ed was designed for scripting per se. I do think it's very conducive to it. I also think that ex (vi(m)'s command mode) is similar. Vim also has an extensive Vimscript language that I see a LOT of very fancy things done in. I also wonder if Visual Basic Scripts (.vbs) (macros) inside of the various Microsoft Office GUI apps might qualify here. I think that scripts are less often used in editors than they once were.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL:

From gtaylor at tnetconsulting.net Sat Jul 7 01:49:26 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 6 Jul 2018 09:49:26 -0600
Subject: [COFF] Other OSes?
In-Reply-To: <20180706000659.GD18361@mcvoy.com>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180705231205.14944156E400@mail.bitblocks.com> <20180706000659.GD18361@mcvoy.com>
Message-ID:

On 07/05/2018 06:06 PM, Larry McVoy wrote:
> So if you hadn't used it for a while, using it basically taught you
> the shortcuts. It was pretty slick, I wish all guis worked like that.

"smit(ty)" from IBM's AIX comes to mind.
It provides a nice curses-based menu interface / form to fill in information /and/ shows the underlying OS command(s) that will be executed. I have always thought this was fairly unique (at least I'm ignorant of anything else like it) and quite helpful.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed...

From gtaylor at tnetconsulting.net Sat Jul 7 01:57:40 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 6 Jul 2018 09:57:40 -0600
Subject: [COFF] Other OSes?
In-Reply-To: <20180706005246.GA28138@thunk.org>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180706005246.GA28138@thunk.org>
Message-ID:

On 07/05/2018 06:52 PM, Theodore Y. Ts'o wrote:
> Granted, somewhere starting around 10 files I'll probably end up breaking
> out some emacs-lisp for further automation, but for a small number of
> files, using file-name completion and then using a handful of control
> characters per file to kick off the keyboard macros a few hundred times
> (C-u C-u C-u C-u C-u C-x C-e) ends up being faster to type than consing
> up a one-off shell or editor script. (Because, basically, an emacs
> keyboard macro really *is* an editor script!)

I've not needed to do something like this very often. The few times that I've needed to do it have usually involved (shell) scripts, likely calling sed and / or awk, possibly with their associated files. (Much like it looks like was done for mk_cmds.) I would also consider using Vim's bufdo command across multiple buffers ("open files"). Depending on the complexity, I'd likely define a macro (if not an all-out Vimscript), and execute said macro / Vimscript via :bufdo. I had assumed that emacs had similar functionality.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From gtaylor at tnetconsulting.net Sat Jul 7 01:59:43 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 6 Jul 2018 09:59:43 -0600 Subject: [COFF] Other OSes? In-Reply-To: <20180706055930.B20471F94A@orac.inputplus.co.uk> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180706005246.GA28138@thunk.org> <20180706055930.B20471F94A@orac.inputplus.co.uk> Message-ID: <02ced8ba-fa5a-56f4-fe4b-daf4e5b7584e@spamtrap.tnetconsulting.net> On 07/05/2018 11:59 PM, Ralph Corderoy wrote: > id(1) `-u' option? Ah, "(constructive) criticism". The joys of sharing what one does with a community and getting all the alternative options / "improvements" / suggestion responses. :-) I think this is a wonderful way to learn. :-D -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From gtaylor at tnetconsulting.net Sat Jul 7 02:10:24 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 6 Jul 2018 10:10:24 -0600 Subject: [COFF] Other OSes? In-Reply-To: <20180706040409.GC11366@eureka.lemis.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180706040409.GC11366@eureka.lemis.com> Message-ID: <9f88f490-6d7f-e12d-bde2-e3410acd7a65@spamtrap.tnetconsulting.net> On 07/05/2018 10:04 PM, Greg 'groggy' Lehey wrote: > It's really difficult to compare with Unix. I've tried several times > over the years, and I still haven't come to any conclusion. The impression that I got while reading your description made me think of distributed systems that use message bus(es) to communicate between applications on different distributed systems. 
Or at least the same distributed IPC idea, just between parts of a single system, no (what is typically TCP/IP over Ethernet) network between them. Does that even come close? Or have I completely missed the boat? -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From ralph at inputplus.co.uk Sat Jul 7 02:10:12 2018 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Fri, 06 Jul 2018 17:10:12 +0100 Subject: [COFF] Other OSes? In-Reply-To: <02ced8ba-fa5a-56f4-fe4b-daf4e5b7584e@spamtrap.tnetconsulting.net> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180706005246.GA28138@thunk.org> <20180706055930.B20471F94A@orac.inputplus.co.uk> <02ced8ba-fa5a-56f4-fe4b-daf4e5b7584e@spamtrap.tnetconsulting.net> Message-ID: <20180706161012.0CD7A21C63@orac.inputplus.co.uk> Hi Grant, > > id(1) `-u' option? ... > I think this is a wonderful way to learn. :-D Like Ted, I favour sh over bash for scripts when possible, and was just chipping in to help increase sh's number. :-) $ strace -c dash -c : |& sed -n '1p;$p' % time seconds usecs/call calls errors syscall 100.00 0.000883 35 1 total $ strace -c bash -c : |& sed -n '1p;$p' % time seconds usecs/call calls errors syscall 100.00 0.002943 145 8 total $ -- Cheers, Ralph. https://plus.google.com/+RalphCorderoy From gtaylor at tnetconsulting.net Sat Jul 7 02:47:15 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 6 Jul 2018 10:47:15 -0600 Subject: [COFF] Other OSes? 
In-Reply-To: <20180706161012.0CD7A21C63@orac.inputplus.co.uk>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <20180706005246.GA28138@thunk.org> <20180706055930.B20471F94A@orac.inputplus.co.uk> <02ced8ba-fa5a-56f4-fe4b-daf4e5b7584e@spamtrap.tnetconsulting.net> <20180706161012.0CD7A21C63@orac.inputplus.co.uk>
Message-ID: <1cfd6b25-dffb-0f8d-c31a-4302af8e4073@spamtrap.tnetconsulting.net>

On 07/06/2018 10:10 AM, Ralph Corderoy wrote:
> Hi Grant,

Hi Ralph,

> Like Ted, I favour sh over bash for scripts when possible, and was just
> chipping in to help increase sh's number. :-)

To each his / her own preferences. I do like the feedback and exposure to alternative solutions.

> $ strace -c dash -c : |& sed -n '1p;$p'
> % time seconds usecs/call calls errors syscall
> 100.00 0.000883 35 1 total
> $ strace -c bash -c : |& sed -n '1p;$p'
> % time seconds usecs/call calls errors syscall
> 100.00 0.002943 145 8 total

I like the info. I'm not at all surprised that dash has fewer calls than bash.

    $ strace -c zsh -c : |& sed -n '1p;$p'
    % time seconds usecs/call calls errors syscall
    100.00 0.000017 271 15 total

Oh my. Zsh is even fatter (call-heavy) than Bash.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL:

From scj at yaccman.com Sat Jul 7 04:27:06 2018
From: scj at yaccman.com (Steve Johnson)
Date: Fri, 06 Jul 2018 11:27:06 -0700
Subject: [COFF] Editor Scripts
In-Reply-To: <9f88f490-6d7f-e12d-bde2-e3410acd7a65@spamtrap.tnetconsulting.net>
Message-ID: <9dadecbf12b9f0d8debe8fd7c8984401766fbb0a@webmail.yaccman.com>

What I used editor scripts for before utilities like sed came along was primarily in what would now be called refactoring. A common pattern for me was to have a function foo that took two arguments and I wanted to add another argument.
Recall in those days that the arguments to a function had no type information: foo( x, y ) would be followed by int x in the body of the function. Also, there were no function prototypes. So a very common error (and a major motivation for writing Lint) was finding functions that were called inconsistently across the program. If I wanted to add an argument to foo, the first thing I would do is run an editor script, something like

    1,$s/foo/old_foo/g
    w
    q

and apply it to all my source and header files. Then I'd verify that the resulting files compiled and ran. Then I would take the definition of foo (now called old_foo), change it to its new form with the extra arguments, and rename it to foo. Then I would grep my source files for old_foo and change the calls as needed. Compiling would find any that I missed, so I could fix them.

Sounds like a lot of work, but if you did it nearly every day it went smoothly.

ed lent itself to scripts. It was all ASCII and line-oriented. With glass teletypes came editors like vi. Suddenly, a major fraction of the letters typed was involved in moving a cursor alone. Also, vi wasn't as good at doing regular expressions as ed, especially when lines were being joined (this was later fixed). So editor scripts went from daily usage to something that felt increasingly alien. The fact that you could see the code being changed was good, but finding the lines to change across a couple of dozen files was much more time consuming...

Steve

PS: There are IDEs that make quickly finding the definition of a function from its uses, or vice versa, much easier now. But I think it falls short of being an abstraction mechanism the way editor scripts were... In particular, you can't put such mouse clicks into a file and run them on a bunch of files...

----- Original Message -----
From: "Grant Taylor"
To:
Cc:
Sent: Fri, 6 Jul 2018 10:10:24 -0600
Subject: Re: [COFF] Other OSes?
On 07/05/2018 10:04 PM, Greg 'groggy' Lehey wrote:
> It's really difficult to compare with Unix. I've tried several times
> over the years, and I still haven't come to any conclusion.

The impression that I got while reading your description made me think of distributed systems that use message bus(es) to communicate between applications on different distributed systems.

Or at least the same distributed IPC idea, just between parts of a single system, no (what is typically TCP/IP over Ethernet) network between them.

Does that even come close? Or have I completely missed the boat?

--
Grant. . . . unix || die

-------------- next part --------------
An HTML attachment was scrubbed... URL:

From gtaylor at tnetconsulting.net Sat Jul 7 05:04:28 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Fri, 6 Jul 2018 13:04:28 -0600
Subject: [COFF] Editor Scripts
In-Reply-To: <9dadecbf12b9f0d8debe8fd7c8984401766fbb0a@webmail.yaccman.com>
References: <9dadecbf12b9f0d8debe8fd7c8984401766fbb0a@webmail.yaccman.com>
Message-ID: <274a9a62-6c27-376a-a74d-6dd2e3268ae9@spamtrap.tnetconsulting.net>

On 07/06/2018 12:27 PM, Steve Johnson wrote:
> What I used editor scripts for before utilities like sed came along was
> primarily in what would now be called refactoring.

Okay. I think you just touched on at least one thing that predates my unix experience. sed has always existed for me. (I've only been doing unix things for about 20 years.) I would think that what I do in sed could (still) fairly easily be done by scripting ed. As I type this, I realize that it might be possible (period, not just easier) to do some things with ed scripts that can't be done with sed. (See the "Pass variable into sed program" thread from early May this year in the alt.comp.lang.shell.unix.bourne-bash newsgroup.)

> A common pattern for me was fo have a function foo that took two
> arguments and I wanted to add another argument.

Okay.
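Steve's rename pattern quoted above still scripts cleanly today. The sketch below uses sed standing in for the ed script, purely because sed is universally installed (`sed -i` is not POSIX, hence the temp file); the `rename_symbol` name and the example globs are invented for illustration:

```shell
# Rename every occurrence of a symbol across a set of files, as in
# Steve's "1,$s/foo/old_foo/g" ed script.  sed stands in for ed here.
# Note: OLD and NEW must not contain sed metacharacters or slashes.
rename_symbol() {               # rename_symbol OLD NEW FILE...
    old=$1; new=$2; shift 2
    for f in "$@"; do
        tmp=$f.tmp.$$
        # Write the edited copy, then move it into place on success.
        sed "s/$old/$new/g" "$f" > "$tmp" && mv "$tmp" "$f"
    done
}
# e.g.:  rename_symbol foo old_foo *.c *.h
```

As with the original ed workflow, the compiler then catches any call sites the rename missed.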
> Recall in those days that the arguments to a function had no type
> information   foo( x, y ) would be followed by int x in the body of the
> function.

I'm ignorant of most of that, both from timing and the fact that I do exceptionally little programming.

> Also, there were no function prototypes.   So a very common error (and a
> major motivation for writing Lint) was finding functions that were
> called inconsistently across the program.

Okay. I can see how that would be problematic and give rise to tools to help avoid said problem.

> If I wanted to add an argument to foo, the first thing I would do is run
> an editor script, something like
>
>     1,$s/foo/old_foo/g
>      w
>      q
>
> and apply it to all my source and header files.

I would think that the exact commands could be run via command mode (thus bufdo) or as a script redirected into ex. I would also seriously consider sed (which you say didn't exist at the time) across the proper files (with shell globbing for selection).

> Then I'd verify that the resulting files compiled and ran.

Fair. That makes perfect sense. I've done similar myself. (Albeit rarely.)

> Then I would take the definition of foo (now called old_foo), and change
> it to its new form, with the extra arguments, and rename it to foo.

I assume you'd use a similar editor script with a different regular expression and / or set of actions. As I type this I can't think of the exact actions I'd use. I'd probably search for the declaration and substitute the old declaration with the new declaration, go down the known number of lines, add the next int z line. Then I'd probably substitute the old invocation with the new invocation across all lines (1,$ or % in vi). I think I'd need a corpus of data to test against / refine.

> Then I would grep my source files for old_foo and change the calls
> as needed. Compiling would find any that I missed, so I could fix them.
*nod* > Sounds like a lot of work, but if you did it nearly every day it went > smoothly. I don't think it is a lot of work. I think it sounds like a codification of the process that I would go through mentally. I've done similar with a lot of other things (primarily config files), sometimes across multiple systems via remote ssh commands. I think such methodology works quite well. It does take a mindset to do it. But that's part of the learning curve. > ed lent itself to scripts.  It was all ascii and line oriented. Fair. > With glass teletypes came editors like vi. Suddenly, a major fraction > of the letters typed were involved in moving a cursor alone. Hum. I hadn't thought about that. I personally haven't done much in ed, so I never had the comparison. But it does make sense. I do have to ask, why does the evolution of vi / vim / emacs / etc preclude you from continuing to use ed the way that you historically used it? I follow a number of people on Twitter that still prefer ed. I suspect for some of the reasons that you are mentioning. > Also, vi wasn't as good at doing regular expressions as ed, especially > when lines were being joined (this was later fixed). That's all new news to me. I've been using vim my entire unix career. I was never exposed to the issues that you mention. > So editor scripts went from daily usage to something that felt > increasingly alien. I guess they don't seem alien to me because I use commands in vim's command mode weekly (if not daily) that seem to be the exact same thing. (Adjusting for vim's syntax vs other similar utilities.) So putting the commands in a file vs typing them on the ex command line makes little difference to me. I'm just doing them interactively. I also feel confident that I could move the commands to their own file that is sourced if I wanted to. Thus I feel like they are still here with us. 
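The commands-kept-in-a-file workflow described above maps directly onto sed's `-f` option, which is the closest surviving relative of an ed script applied in batch. A minimal sketch; the `cleanup.sed` name, the trailing-whitespace edit, and the `apply_cleanup` helper are all invented for illustration:

```shell
# An "editor script" kept in its own file, then applied to many files.
# cleanup.sed holds the edit commands, exactly as an ed script would.
cat > cleanup.sed <<'EOF'
s/[[:space:]]*$//
EOF

# Apply the script file to each named file in place (via a temp copy).
# Assumes cleanup.sed is in the current directory.
apply_cleanup() {               # apply_cleanup FILE...
    for f in "$@"; do
        sed -f cleanup.sed "$f" > "$f.new" && mv "$f.new" "$f"
    done
}
# e.g.:  apply_cleanup *.c *.h
```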
> The fact that you could see the code being changed was good, but finding
> the lines to change across a couple of dozen files was much more time
> consuming...

Ya. I can see how just having the operation happen, without re-displaying (printing), could be beneficial.

Aside: I'm reminded of a time when I edited a big (for the time) file (< 1MB) with edit on a 386. I had it do a search for a shorter string and replace it with a longer string. I could literally watch as it brought the line onto (usually the top of) the screen, (re)drew the rest of the screen, did the substitution, which bumped subsequent text and meant redrawing the remainder of the screen, then found the next occurrence on screen, and repeated. I ended up walking away as that operation couldn't be canceled and took about 20 minutes to run. Oy vey.

> PS:  There are IDEs that make quickly finding the definitions of a
> function from its uses, or vice versa, much easier now.

I think that it's highly contextually sensitive and really a sub-set of what scripts can do.

> But I think it falls short of being an abstraction mechanism the way
> editor scripts were...

Agreed. Once you know the process that's being done, you can alter it to work for any other thing that you want to do.

> In particular, you can't put such mouse clicks into a file and run them
> on a bunch of files...

Oh ... I'm fairly certain that there are ways to script mouse clicks. But that's an entirely different level of annoyance, one that usually requires things to retain focus or at least not be covered by other windows.

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL:

From gtaylor at tnetconsulting.net Sun Jul 8 15:17:43 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sat, 7 Jul 2018 23:17:43 -0600
Subject: [COFF] Today I learned something new about File Transfer Protocol.
Message-ID: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net>

Today I learned that File Transfer Protocol supports transferring data between two remote systems without passing the data through a controlling host. The end of § 2.3, The FTP Model, of RFC 959, File Transfer Protocol, states the following:

"""
In another situation a user might wish to transfer files between two hosts, neither of which is a local host. The user sets up control connections to the two servers and then arranges for a data connection between them. In this manner, control information is passed to the user-PI but data is transferred between the server data transfer processes. Following is a model of this server-server interaction.

             Control     ------------    Control
             ---------->| User-FTP |<-----------
             |          | User-PI  |           |
             |          |   "C"    |           |
             V          ------------           V
     --------------                        --------------
     | Server-FTP |   Data Connection      | Server-FTP |
     |    "A"     |<---------------------->|    "B"     |
     --------------  Port (A)     Port (B) --------------
"""

I also learned that FTP uses (a subset of) the Telnet protocol for its control connection. Yet another reason to dislike it. (I strongly prefer 8-bit clean connections and dislike things that need special handling.)

--
Grant. . . . unix || die

-------------- next part --------------
A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL:

From perry at piermont.com Mon Jul 9 06:28:33 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Sun, 8 Jul 2018 16:28:33 -0400
Subject: [COFF] Today I learned something new about File Transfer Protocol.
In-Reply-To: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> References: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> Message-ID: <20180708162833.2de9eca0@jabberwock.cb.piermont.com> On Sat, 7 Jul 2018 23:17:43 -0600 Grant Taylor via COFF wrote: > Today I learned that File Transfer Protocol supports transferring > data between two remote systems without passing the data through a > controlling host. Not only does it permit that, but this mechanism was fascinatingly abused a few times in the early 1990s as a means to enable security policy violations. :) > I also learned that FTP uses (a subset of) the Telnet protocol for > it's control connection. Yet another reason to dislike it. (I > strongly prefer 8-bit clean connections and dislike things that > need special handling.) In the old old days, FTP had all sorts of other functions, including mail transfer. (SMTP obsoleted that by 1980 or so.) Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 06:31:50 2018 From: perry at piermont.com (Perry E. Metzger) Date: Sun, 8 Jul 2018 16:31:50 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: <20180708163150.0c9e1870@jabberwock.cb.piermont.com> On Jul 4, 2018, at 10:56 PM, Warren Toomey wrote: > > OK, I guess I'll be the one to start things going on the COFF > list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? A fascinating topic! On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah wrote: > - Capabilities (a number of OSes implemented them -- See Hank > Levy's book: https://homes.cs.washington.edu/~levy/capabook/ A fascinating resurrection of capabilities in a kind of pleasant Unixy-way can be found in Robert Watson's "Capsicum" add-ons for FreeBSD and Linux. I wish it was more widely known about and adopted. > - Namespaces (plan9) A good choice I think. 
Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 06:50:06 2018 From: perry at piermont.com (Perry E. Metzger) Date: Sun, 8 Jul 2018 16:50:06 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180705055650.GA2170@minnie.tuhs.org> References: <20180705055650.GA2170@minnie.tuhs.org> Message-ID: <20180708165006.21a7429e@jabberwock.cb.piermont.com> On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey wrote: > OK, I guess I'll be the one to start things going on the COFF list. > > What other features, ideas etc. were available in other operating > systems which Unix should have picked up but didn't? > > [ Yes, I know, it's on-topic for TUHS but I doubt it will be for > long! ] A minor feature that I might mention: TOPS-20 CMND JSYS style command completion. TL;DR, this feature could now be implemented, as after decades of wanting it I finally know how to do it in a unixy way. In TOPS-20, any time you were at the EXEC (the equivalent of the shell), you could hit "?" and the thing would tell you what options there were for the next thing you could type, and you could hit ESC to complete the current thing. This was Very Very Nice, as flags and options to programs were all easily discoverable and you had a handy reminder mechanism when you forgot what you wanted. bash has some vague lame equivalents of this (it will complete filenames if you hit tab etc.), and if you write special scripts you can add domain knowledge into bash of particular programs to allow for special application-specific completion, but overall it's kind of lame. Here's the Correct Way to implement this: have programs implement a special flag that allows them to tell the shell how to do completion for them! 
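That protocol can be mocked up in a few lines of shell. Everything below is invented for illustration -- the tool name, its option table, and the exact flag spelling (clang's real flag is `--autocomplete`); the bash wiring is shown only in comments, since it applies to interactive shells:

```shell
# Hypothetical tool "mytool" that can be asked for its own completions:
# given --autocomplete=PREFIX it prints every option matching PREFIX.
# A real tool would consult its full option table; this stub hard-codes
# three flags.
mytool() {
    case $1 in
        --autocomplete=*)
            prefix=${1#--autocomplete=}
            for opt in --output --verbose --version; do
                case $opt in
                    "$prefix"*) printf '%s\n' "$opt" ;;
                esac
            done
            ;;
    esac
}

# In bash one would then register it along the lines of:
#   _mytool() { COMPREPLY=( $(mytool "--autocomplete=$2") ); }
#   complete -F _mytool mytool
```

The shell never needs built-in knowledge of the tool's flags; the tool itself is the single source of truth for its own completions.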
I got this idea from this feature being hacked in, in an ad hoc way, into clang: http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html but it is apparent that with a bit of work, one could standardize such a feature and allow nearly any program to provide the shell with such information, which would be very cool. Best of all, it's still unixy in spirit (IMHO). Kudos to the LLVM people for figuring out the right way to do this. I'd been noodling on it since maybe 1984 without any real success. Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 06:53:17 2018 From: perry at piermont.com (Perry E. Metzger) Date: Sun, 8 Jul 2018 16:53:17 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180708163150.0c9e1870@jabberwock.cb.piermont.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708163150.0c9e1870@jabberwock.cb.piermont.com> Message-ID: <20180708165317.280ebe34@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 16:31:50 -0400 "Perry E. Metzger" wrote: > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah > wrote: > > - Capabilities (a number of OSes implemented them -- See Hank > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/ > > A fascinating resurrection of capabilities in a kind of pleasant > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for > FreeBSD and Linux. I wish it was more widely known about and > adopted. Thought I might mention where to find more information on that: Paper: https://www.usenix.org/legacy/event/sec10/tech/full_papers/Watson.pdf Presentation at Usenix Security: https://www.youtube.com/watch?v=raNx9L4VH2k Web page: https://www.cl.cam.ac.uk/research/security/capsicum/documentation.html -- Perry E. Metzger perry at piermont.com From bakul at bitblocks.com Mon Jul 9 09:27:54 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Sun, 8 Jul 2018 16:27:54 -0700 Subject: [COFF] Other OSes? 
In-Reply-To: <20180708165006.21a7429e@jabberwock.cb.piermont.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> Message-ID: On Jul 8, 2018, at 1:50 PM, Perry E. Metzger wrote: > > On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey wrote: >> OK, I guess I'll be the one to start things going on the COFF list. >> >> What other features, ideas etc. were available in other operating >> systems which Unix should have picked up but didn't? >> >> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for >> long! ] > > A minor feature that I might mention: TOPS-20 CMND JSYS style command > completion. TL;DR, this feature could now be implemented, as after > decades of wanting it I finally know how to do it in a unixy way. > > In TOPS-20, any time you were at the EXEC (the equivalent of the > shell), you could hit "?" and the thing would tell you what options > there were for the next thing you could type, and you could hit ESC to > complete the current thing. This was Very Very Nice, as flags and > options to programs were all easily discoverable and you had a handy > reminder mechanism when you forgot what you wanted. > > bash has some vague lame equivalents of this (it will complete > filenames if you hit tab etc.), and if you write special scripts you > can add domain knowledge into bash of particular programs to allow for > special application-specific completion, but overall it's kind of lame. > > Here's the Correct Way to implement this: have programs implement a > special flag that allows them to tell the shell how to do completion > for them! I got this idea from this feature being hacked in, in an ad > hoc way, into clang: > > http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html > > but it is apparent that with a bit of work, one could standardize such > a feature and allow nearly any program to provide the shell with such > information, which would be very cool. 
Best of all, it's still unixy in spirit (IMHO).

I believe autocompletion has been available for 20+ years. IIRC, I switched to zsh in 1995 and it had autocompletion then. But you do have to teach zsh/bash how to autocomplete for a given program. For instance

    compctl -K listsysctls sysctl
    listsysctls() { set -A reply $(sysctl -AN ${1%.*}) }

The compctl tells the shell what keyword list to use (lowercase k) or command to use to generate such a list (uppercase K). Then the command has to figure out how to generate such a list given a prefix.

This sort of magic incantation is needed because no one has bothered to create a simple library for autocompletion & no standard convention has sprung up that a program can use. It is not entirely trivial but not difficult either.

Cisco's CLI is a great model for this. It would even prompt you with a help string for non-keyword args such as ip address or host! With Cisco CLI ^D to list choices, ^I to try autocomplete and ? to provide context-sensitive help. It was even better in that you can get away with just typing a unique prefix. No need to hit . Very handy for interactive use.

From grog at lemis.com Mon Jul 9 10:00:46 2018
From: grog at lemis.com (Greg 'groggy' Lehey)
Date: Mon, 9 Jul 2018 10:00:46 +1000
Subject: [COFF] Other OSes?
In-Reply-To: 
References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com>
Message-ID: <20180709000046.GD11366@eureka.lemis.com>

On Sunday, 8 July 2018 at 16:27:54 -0700, Bakul Shah wrote:
> I believe autocompletion has been available for 20+ years.

Yes, I started using bash in 1990, and it had autocompletion then. A colleague was using it even earlier, and pointed out to me autocompletion and Emacs-style editing as one of the great advantages.

Greg
--
Sent from my desktop computer.
Finger grog at lemis.com for PGP public key.
See complete headers for address and phone numbers.
This message is digitally signed.
If your Microsoft mail program reports problems, please read
http://lemis.com/broken-MUA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 163 bytes
Desc: not available
URL: 

From crossd at gmail.com Mon Jul 9 10:05:29 2018
From: crossd at gmail.com (Dan Cross)
Date: Sun, 8 Jul 2018 20:05:29 -0400
Subject: [COFF] Other OSes?
In-Reply-To: 
References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com>
Message-ID: 

On Sun, Jul 8, 2018 at 7:28 PM Bakul Shah wrote:
> [...]
> I believe autocompletion has been available for 20+ years. IIRC, I
> switched to zsh in 1995 and it has had autocompletion then. But you
> do have to teach zsh/bash how to autocomplete for a given program.

csh has had filename auto-completion since the late 70s or early 80s, though nowhere near as rich or as full-featured as bash/zsh, let alone TOPS-20.

[...]
> This sort of magic incantation is needed because no one has bothered
> to create a simple library for autocompletion & no standard convention
> has sprung up that a program can use. It is not entirely trivial but
> not difficult either.

This. Much of the issue with Unix was convention, or rather, lack of a consistent convention. Proponents of DEC operating systems that I've known decry that Unix can't do stuff like `RENAME *.FTN *.FOR`, because the shell does wildcard expansion. Unix people I know will retort that that behavior depends on programming against specific libraries. I think the big difference is that the DEC systems really encouraged using those libraries and made their use idiomatic and trivial; it was such a common convention that NOT doing it that way was extraordinary. On the other hand, Unix never mandated any specific way to do these things; as a result, everyone did their own thing.
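To make the RENAME example concrete, here is a rough sketch (in Python, purely for illustration; this is not how DEC implemented it) of the pattern-pair mapping the DEC-style libraries performed. Under Unix the shell expands `*.FTN` before the command runs, and `*.FOR` usually matches nothing, so the command never sees either pattern:

```python
# Illustrative sketch only: DEC-style RENAME receives the patterns themselves,
# so it can map each matching name onto the destination pattern.
import fnmatch

def map_name(name, src_pat, dst_pat):
    """Map one filename through a single-'*' source/destination pattern pair."""
    if not fnmatch.fnmatch(name, src_pat):
        return None
    prefix, star, suffix = src_pat.partition('*')
    if not star:                      # no wildcard: exact rename
        return dst_pat
    stem = name[len(prefix):len(name) - len(suffix)]  # the part '*' matched
    return dst_pat.replace('*', stem, 1)

# e.g. RENAME *.FTN *.FOR over a directory listing:
files = ["main.FTN", "io.FTN", "README"]
renames = [(f, map_name(f, "*.FTN", "*.FOR")) for f in files
           if map_name(f, "*.FTN", "*.FOR") is not None]
# renames == [("main.FTN", "main.FOR"), ("io.FTN", "io.FOR")]
```

The point is not that Unix *couldn't* do this — `mv` could have been written this way — but that no common library made it the idiomatic path.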
How many times have you looked at a Unix utility of, er, a certain age, and seen `main()` start something like:

    main(argc, argv)
    int argc;
    char *argv[];
    {
        if (--argc > 0 && *argv[1] == '-') {
            argv++;
            while (*++*argv)
                switch (**argv) {
                case 'a':
                    /* etc.... */
                    continue;
                }
        }
        /* And so on.... */

I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled! And thus inconsistent.

> Cisco's CLI is a great model for this. It would
> even prompt you with a help string for non-keyword args such as ip
> address or host! With Cisco CLI ^D to list choices, ^I to try auto
> complete and ? to provide context sensitive help. It was even better
> in that you can get away with just typing a unique prefix. No need to
> hit . Very handy for interactive use.

This is unsurprising as the Cisco CLI is very clearly modeled after TOPS-20/TENEX, which did all of those things (much of which was in the CMND JSYS after work transferred to DEC and TENEX became TOPS-20). It's definitely one of the cooler features of twenex.

- Dan C.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From perry at piermont.com Mon Jul 9 10:11:51 2018
From: perry at piermont.com (Perry E. Metzger)
Date: Sun, 8 Jul 2018 20:11:51 -0400
Subject: [COFF] Other OSes?
In-Reply-To: 
References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com>
Message-ID: <20180708201151.03aa46c0@jabberwock.cb.piermont.com>

On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah wrote:
> On Jul 8, 2018, at 1:50 PM, Perry E. Metzger wrote:
> >
> > On Thu, 5 Jul 2018 15:56:50 +1000 Warren Toomey wrote:
> >> OK, I guess I'll be the one to start things going on the COFF
> >> list.
> >>
> >> What other features, ideas etc. were available in other operating
> >> systems which Unix should have picked up but didn't?
> >>
> >> [ Yes, I know, it's on-topic for TUHS but I doubt it will be for
> >> long!
] > > > > A minor feature that I might mention: TOPS-20 CMND JSYS style > > command completion. TL;DR, this feature could now be implemented, > > as after decades of wanting it I finally know how to do it in a > > unixy way. > > > > In TOPS-20, any time you were at the EXEC (the equivalent of the > > shell), you could hit "?" and the thing would tell you what > > options there were for the next thing you could type, and you > > could hit ESC to complete the current thing. This was Very Very > > Nice, as flags and options to programs were all easily > > discoverable and you had a handy reminder mechanism when you > > forgot what you wanted. > > > > bash has some vague lame equivalents of this (it will complete > > filenames if you hit tab etc.), and if you write special scripts > > you can add domain knowledge into bash of particular programs to > > allow for special application-specific completion, but overall > > it's kind of lame. > > > > Here's the Correct Way to implement this: have programs implement > > a special flag that allows them to tell the shell how to do > > completion for them! I got this idea from this feature being > > hacked in, in an ad hoc way, into clang: > > > > http://blog.llvm.org/2017/09/clang-bash-better-auto-completion-is.html > > > > but it is apparent that with a bit of work, one could standardize > > such a feature and allow nearly any program to provide the shell > > with such information, which would be very cool. Best of all, > > it's still unixy in spirit (IMHO). > > I believe autocompletion has been available for 20+ years. IIRC, I > switched to zsh in 1995 and it has had autocompletion then. But you > do have to teach zsh/bash how to autocomplete for a given program. > For instance Yes, that's the point. You have to write it for the programs and the programs have no way to convey to the shell what they want. this fixes that. 
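Concretely, the convention might look something like this sketch (the `--autocomplete` flag and the option names below are hypothetical, loosely modeled on the clang mechanism at the URL above): the program itself reports which of its flags match a partial word, one per line, and a single generic shell hook can then drive completion for any program implementing the convention.

```python
# Sketch of a hypothetical "--autocomplete <prefix>" convention. Everything
# here (the flag name, the option list) is invented for illustration.
import sys

KNOWN_FLAGS = ["--file", "--force", "--help", "--verbose", "--version"]

def complete(prefix):
    """Return the flags this program would offer for a partial word."""
    return [f for f in KNOWN_FLAGS if f.startswith(prefix)]

def main(argv):
    if len(argv) >= 2 and argv[1] == "--autocomplete":
        # The shell re-execs us with the partial word; we print candidates,
        # one per line, and exit. The shell reads them for its COMPREPLY.
        prefix = argv[2] if len(argv) > 2 else ""
        print("\n".join(complete(prefix)))
        return 0
    # ... normal operation of the program would go here ...
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

The key property is that the completion knowledge lives in the program, not in a per-program shell script someone has to write and keep in sync.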
> This sort of magic incantation is needed because no one has bothered > to create a simple library for autocompletion & no standard > convention has sprung up that a program can use. Yes, I know. That's exactly what I'm explaining. Read the URL above. It describes a quite general mechanism for a program to convey to the shell, without needing any special binary support, how autocompletion should work. Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 10:13:09 2018 From: perry at piermont.com (Perry E. Metzger) Date: Sun, 8 Jul 2018 20:13:09 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180709000046.GD11366@eureka.lemis.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180709000046.GD11366@eureka.lemis.com> Message-ID: <20180708201309.2c4f85f6@jabberwock.cb.piermont.com> On Mon, 9 Jul 2018 10:00:46 +1000 Greg 'groggy' Lehey wrote: > On Sunday, 8 July 2018 at 16:27:54 -0700, Bakul Shah wrote: > > I believe autocompletion has been available for 20+ years. > > Yes, I started using bash in 1990, and it had autocompletion then. > A colleague was using it even earlier, and pointed out to me > autocompletion and Emacs-style editing as one of the great > advantages. Yes, I know. But it only autocompletes file names, not program flags and arguments etc., unless you write special bash scripts for every single program examining what its arguments are. The point of what I wrote is that there is now a good idea about how programs can convey their arguments to the shell, thus allowing a general fix to the issue. -- Perry E. Metzger perry at piermont.com From crossd at gmail.com Mon Jul 9 10:19:18 2018 From: crossd at gmail.com (Dan Cross) Date: Sun, 8 Jul 2018 20:19:18 -0400 Subject: [COFF] Other OSes? 
In-Reply-To: <20180708201151.03aa46c0@jabberwock.cb.piermont.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> Message-ID: On Sun, Jul 8, 2018 at 8:12 PM Perry E. Metzger wrote: > On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah > wrote: > [snip] > > This sort of magic incantation is needed because no one has bothered > > to create a simple library for autocompletion & no standard > > convention has sprung up that a program can use. > > Yes, I know. That's exactly what I'm explaining. Read the URL above. > It describes a quite general mechanism for a program to convey to the > shell, without needing any special binary support, how autocompletion > should work. I read that article and it wasn't clear to me that the `--autocomplete` argument sent anything back to the shell. I suppose you could use it with bash/zsh style completion script to get something analogous to context-sensitive help, but it relies on exec'ing the command (clang or whatever) and getting output from it that someone or something can parse and present back to the user in the form of a partial command line; the examples they had seemed to be when invoking from within emacs and an X-based program to build up a command. This is rather unlike how TOPS-20 did it, wherein an image was run-up in the user's process (effectively, the program was exec'ed, though under TOPS-20 exec didn't overwrite the image of the currently running program like the shell) and asked about completions. The critical difference is that under TOPS-20 context-sensitive help and completions were provided by the already-runnable/running program. In Unix, the program must be re-exec'd. It's not a terrible model, but it's very different in implementation. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From lm at mcvoy.com Mon Jul 9 10:56:59 2018
From: lm at mcvoy.com (Larry McVoy)
Date: Sun, 8 Jul 2018 17:56:59 -0700
Subject: [COFF] Other OSes?
In-Reply-To: 
References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com>
Message-ID: <20180709005659.GO26072@mcvoy.com>

On Sun, Jul 08, 2018 at 08:05:29PM -0400, Dan Cross wrote:
> On Sun, Jul 8, 2018 at 7:28 PM Bakul Shah wrote:
> > [...]
> > I believe autocompletion has been available for 20+ years. IIRC, I
> > switched to zsh in 1995 and it has had autocompletion then. But you
> > do have to teach zsh/bash how to autocomplete for a given program.
>
> csh has had filename auto-completion since the late 70s or early 80s,
> though nowhere as rich or as full-featured as bash/zsh, let alone TOPS-20.

Yeah, I'm gonna go with what I learned about TOPS-20. I didn't know that; that's way, way better than any auto completion I've seen on Unix.

> This. Much of the issue with Unix was convention, or rather, lack of a
> consistent convention. Proponents of DEC operating systems that I've known
> decry that Unix can't do stuff like, `RENAME *.FTN *.FOR`, because the

http://mcvoy.com/lm/move

fixes that. Since around the late 80's.

> main(argc, argv)
> int argc;
> char *argv[];
> {
> 	if (--argc > 0 && *argv[1] == '-') {
> 		argv++;
> 		while (*++*argv)
> 			switch (**argv) {
> 			case 'a':
> 				/* etc.... */
> 				continue;
> 			}
> 	}
> 	/* And so on.... */
>
> I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled! And
> thus inconsistent.

Gotta agree with this one. Getopt should have been a thing from day one. We rolled our own that I like:

http://repos.bkbits.net/bk/dev/src/libc/utils/getopt.c?PAGE=anno&REV=56cf7e34186wKr7L6Lpntw_hwahS0A

From bakul at bitblocks.com Mon Jul 9 12:00:31 2018
From: bakul at bitblocks.com (Bakul Shah)
Date: Sun, 8 Jul 2018 19:00:31 -0700
Subject: [COFF] Other OSes?
In-Reply-To: References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> Message-ID: <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> On Jul 8, 2018, at 5:19 PM, Dan Cross wrote: > > On Sun, Jul 8, 2018 at 8:12 PM Perry E. Metzger wrote: > On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah > wrote: > [snip] > > This sort of magic incantation is needed because no one has bothered > > to create a simple library for autocompletion & no standard > > convention has sprung up that a program can use. > > Yes, I know. That's exactly what I'm explaining. Read the URL above. > It describes a quite general mechanism for a program to convey to the > shell, without needing any special binary support, how autocompletion > should work. > > I read that article and it wasn't clear to me that the `--autocomplete` argument sent anything back to the shell. I suppose you could use it with bash/zsh style completion script to get something analogous to context-sensitive help, but it relies on exec'ing the command (clang or whatever) and getting output from it that someone or something can parse and present back to the user in the form of a partial command line; the examples they had seemed to be when invoking from within emacs and an X-based program to build up a command. It wasn't clear to me either. I looked at the clang webpage again and all I see is autocompletion support for clang itself. [In a way this points to the real problem of clang having too many options] > This is rather unlike how TOPS-20 did it, wherein an image was run-up in the user's process (effectively, the program was exec'ed, though under TOPS-20 exec didn't overwrite the image of the currently running program like the shell) and asked about completions. The critical difference is that under TOPS-20 context-sensitive help and completions were provided by the already-runnable/running program. 
> In Unix, the program must be re-exec'd. It's not a terrible model, but
> it's very different in implementation.

When we did cisco CLI support in our router product, syntax for all the commands was factored out. So for example

    show.ip            help IP information
    show.ip.route      help IP routing table
                       exec cmd_showroutes()
    show.ip.route.ADDR help IP address
                       verify verify_ipaddr()
                       exec cmd_showroutes()

etc. This was a closed universe of commands, so it was easy to extend. Perhaps something similar can be done, where in a command src dir you also store allowed syntax. This can be compiled to a syntax tree and attached to the binary using some convention. The cmd binary need not deal with this (except perhaps when help or verify required cmd-specific support). I ended up replicating this sort of thing twice since then.

From tytso at mit.edu Mon Jul 9 11:56:50 2018
From: tytso at mit.edu (Theodore Y. Ts'o)
Date: Sun, 8 Jul 2018 21:56:50 -0400
Subject: [COFF] Other OSes?
In-Reply-To: <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net>
Message-ID: <20180709015650.GA29373@thunk.org>

On Fri, Jul 06, 2018 at 09:38:12AM -0600, Grant Taylor wrote:
> > With the advent of glass teletypes, shell scripts simply evaporated --
> > there was no equivalent. (Yes, there were programs like sed, but it
> > wasn't the same...) Changing, e.g., a function name in 10 files got a
> > lot more tedious.
>
> I don't understand that at all. How did glass ttys (screens) change what
> people do / did on unix?

I never used Unix on teletypes; when I was using an ASR-35 and a KSR-33 teletype, it was connected to a PDP-8/i and PDP-15/30, although both did have a line editor that was very similar to /bin/ed.
(This is why to this day if I'm on a slow link or am running in a reduced rescue environment, I fall back to /bin/ed, not /bin/vi --- my finger macros are more efficient using /bin/ed than /bin/vi.) At least for me, the huge difference that made a difference to how I would use a computer primarily had to do with speed that could be sent from a computer. So even when using a glass tty, if there was 300 or 1200 bps modem between me and the computer, I would be much more likely to use editor scripts --- and certainly, I'd be much more likely to use a line editor than anything curses-oriented, whether it's vim or emacs. I'd also be much more thoughtful about figuring out how to carefully do a global search and replace in a way that wouldn't accidentally make the wrong change. Forcing myself to think for a minute or two about how do clever global search and replaces was well worth it when there was a super-thin pipe between me and the computer. These days, I'll just use emacs's query-replace, which will allow me to approve each change in context, either for each change, or once I'm confident that I got the simple-search-and-replace, or regexp-search-and-replace right, have it do the rest of the changes w/o approval. > What could / would you do at a shell prompt pre-glass-TTYs that you can't do > the same now with glass-TTYs? It's not what you *can't* do with a glass-tty. It's just that with a glass-tty, I'm much more likely to rely on incremental searches of my bash command-line history to execute previous commands, possibly with some changes, because it's more convenient than firing up an editor and creating a shell script. But there have been times, even recently, when I've been stuck behind a slow link (say, because of a crappy hotel network), where I'll find myself reverting, at least partially, to my old teletype / 1200 modem habits. 
- Ted From crossd at gmail.com Mon Jul 9 12:23:53 2018 From: crossd at gmail.com (Dan Cross) Date: Sun, 8 Jul 2018 22:23:53 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180709005659.GO26072@mcvoy.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180709005659.GO26072@mcvoy.com> Message-ID: On Sun, Jul 8, 2018 at 8:57 PM Larry McVoy wrote: > On Sun, Jul 08, 2018 at 08:05:29PM -0400, Dan Cross wrote: > > This. Much of the issue with Unix was convention, or rather, lack of a > > consistent convention. Proponents of DEC operating systems that I've > known > > decry that Unix can't do stuff like, `RENAME *.FTN *.FOR`, because the > > http://mcvoy.com/lm/move > > fixes that. Since around the late 80's. > Oh sure, you can do it. Quoting the glob patterns to avoid shell expansion is trivial, and then writing a command to expand the patterns oneself (as you've clearly done) isn't too bad. But that's not the point. The point is that there was no standard way to do it: is there a command to translate character sets, such as `transcs *.latin1 *.utf8` ? Incidentally, I think that `move` sort of supports the thesis: I see you pulled in Ozan's regex and Guido's globbing code and _distributed them with move_. This latter part is important: I imagine you did that because the functionality wasn't part of the base operating system image. (Pretty cool commands, by the way; I'm going to pull that one down locally.) > main(argc, argv) > > int argc; > > char *argv[]; > > { > > if (--argc > 0 && *argv[1] == '-') { > > argv++; > > while (*++*argv) > > switch (**argv) { > > case 'a': > > /* etc.... */ > > continue; > > } > > } > > /* And so on.... */ > > > > I mean, goodness: we didn't even use getopt(3)! It was all hand-rolled! > And > > thus inconsistent. > > Gotta agree with this one. Getopt should have been a thing from day one. 
> We rolled our own that I like: > > > http://repos.bkbits.net/bk/dev/src/libc/utils/getopt.c?PAGE=anno&REV=56cf7e34186wKr7L6Lpntw_hwahS0A That's pretty cool. But again, I don't think it's that these things weren't *possible* under Unix (obviously they were and are) but that they weren't _conventional_. I don't think a VAX running VMS would have immolated itself if you didn't use the VMS routines to do wildcard processing, but it would probably have been considered strange. Under Unix that was the norm. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Mon Jul 9 12:44:23 2018 From: crossd at gmail.com (Dan Cross) Date: Sun, 8 Jul 2018 22:44:23 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180708163150.0c9e1870@jabberwock.cb.piermont.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708163150.0c9e1870@jabberwock.cb.piermont.com> Message-ID: On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger wrote: > [snip] > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah > wrote: > > - Capabilities (a number of OSes implemented them -- See Hank > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/ > > A fascinating resurrection of capabilities in a kind of pleasant > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for > FreeBSD and Linux. I wish it was more widely known about and adopted. > > > - Namespaces (plan9) > > A good choice I think. Interestingly, it was in talking with Ben Laurie about a potential security model for Akaros that I realized the correspondence between Plan9-style namespaces and capabilities. Since Akaros had imported a lot of plan9 namespace code, we concluded that we already had the building blocks for a capability-style security model with minimal additional work. I subsequently concluded that Capsicum wasn't strong enough to be complete. By way of example, I refer back to my earlier question about networking. 
For example, rights(4) on FreeBSD lists CAP_BIND, CAP_CONNECT, CAP_ACCEPT and other socket related capabilities. That's fine for modeling whether a process can make a *system call*, but insufficient for validating the parameters of that system call. Being granted CAP_CONNECT lets me call connect(2), but how do I model a capability for establishing a TCP connection to some restricted set of ports on a restricted set of destination hosts? CAP_BIND let's me call bind(2), but is there some mechanism to represent the capability of only being able to bind(2) to TCP port 12345? I imagine I'd want to make it so that I can restrict the network 5-tuple a particular process can interact with based on some local policy, but Capsicum as it stands seems too coarse-grained for that. Similarly with file open's, etc. Curiously, though it wasn't specifically designed for it, but it seems like the namespace-style representation of capabilities would be simultaneously more Unix-like and stronger: since the OS only has system calls for interacting with file-like objects (and processes, but ignore those for a moment) in the current namespace, a namespace can be constructed with *exactly* the resources the process can interact with. Given that networking was implemented as a filesystem, one can imagine interposing a proxy filesystem that represents the network stack to a process, but imposes policy restrictions on how the stack is used (e.g., inspecting connect establishment requests and restricting them to a set of 5-tuples, etc). - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From crossd at gmail.com Mon Jul 9 12:51:00 2018 From: crossd at gmail.com (Dan Cross) Date: Sun, 8 Jul 2018 22:51:00 -0400 Subject: [COFF] Other OSes? 
In-Reply-To: <20180706054302.72718156E400@mail.bitblocks.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180706054302.72718156E400@mail.bitblocks.com> Message-ID: On Fri, Jul 6, 2018 at 1:43 AM Bakul Shah wrote: > [snip some very interesting and insightful comments] > Mill ideas are very much worth exploring. It will be possible > to build highly secure systems with it -- if it ever gets > sufficiently funded and built! IMHO layers of mapping as with > virtualization/containerization are not really needed for > better security or isolation. > Sure, with emphasis on that "if it ever gets sufficiently funded and built!" part. :-) It sounds cool, but what to do on extant hardware? Similarly with CHERI: they change nearly everything (including the hardware). > 2. Is mmap() *really* the best we can do for mapping arbitrary resources > > into an address space? > > I think this is fine. Even remote objects mmapping should > work! > Sure, but is it the *best* we can do? Subjectively, the interface is pretty ugly, and we're forced into a multi-level store. Maybe that's OK; it sure seems like we haven't come up with anything better. But I wonder whether that's because we've found some local maxima in our pursuit of functionality vs cost, or because we're so stuck in the model of multi-level stores and mapping objects into address spaces that we can't see beyond it. And it sure would be nice if the ergonomics of the programming interface were better. > 3. A more generalized message passing system would be cool. Something > where > > you could send a message with a payload somewhere in a synchronous way > > would be nice (perhaps analogous to channels). VMS-style mailboxes would > > have been neat. > > Erlang. Carl Hewitt's Actor model has this. > > [1] > http://tierra.aslab.upm.es/~sanz/cursos/DRTS/AlphaRtDistributedKernel.pdf I'm going to read that paper, but it's at least a couple of decades old (one of the authors is affiliated with DEC). - Dan C. 
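For concreteness, the mmap() model being second-guessed in the exchange above looks roughly like this (a sketch using Python's thin wrapper over the same system call): a file is mapped into the address space, and plain memory loads and stores stand in for read(2) and write(2). This is the "multi-level store" interface the question asks whether we can improve on.

```python
# Sketch of the baseline: map a file into the address space and treat it as
# memory. Stores through the mapping land in the file without write(2).
import mmap, os, tempfile

def mapped_upcase_prefix(data, n):
    """Write data to a temp file, mmap it, upcase the first n bytes through
    the mapping, and return the file's final contents."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, data)
        m = mmap.mmap(fd, 0)          # map the whole file, read/write
        m[:n] = m[:n].upper()         # ordinary memory stores, not write(2)
        m.flush()
        result = bytes(m)
        m.close()
        return result
    finally:
        os.close(fd)
        os.remove(path)
```

Whatever replaced mmap() would have to preserve this property — object state and memory state are the same state — while cleaning up the ergonomics.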
-------------- next part -------------- An HTML attachment was scrubbed... URL: From bill at CORCORAN.LIFE Mon Jul 9 13:02:22 2018 From: bill at CORCORAN.LIFE (William Corcoran) Date: Mon, 9 Jul 2018 03:02:22 +0000 Subject: [COFF] Origination of awful security design [COFF, COFF] In-Reply-To: <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> , <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> Message-ID: <78B2B97A-B0F8-47B8-8CE9-93B746DF981C@CORCORAN.LIFE> Okay, who or what organization did it? A wretched security issue that remains in many UNIX variants even to this day... Is this an example of boneheaded groupthink or a lone wolf trying to achieve parallel structure in design without thinking of the consequences: There is a command called “last.” Typically, when used with -R you can tell the last successful logins on the system along with origination IP. Very clean command. Then, presumably, someone else comes along and writes its corollary: lastb The “lastb” command is the “last” command’s evil twin. The lastb command is one of the finest examples on the need for taking human nature, if not common sense, into the deliberation process when deciding available commands and functionality. The “lastb” command takes whatever was keyed into login that fails to register as a successful login and writes the output to file. Thus, any user making a fat finger mistake is certain to key in a password into the “login:” (log name) request of the login command over time. Over time, say three (3) years with even a small user population, passwords will be revealed in the lastb file: btmp, bwtmp, and so on. The problem, of course, is that there is no attempt to encrypt this file used by lastb on many UNIX systems. Oh, your cron kicks off a job that truncates the files used by lastb every week you say? 
That’s playing fast and loose. Anyway, this reminds me of a trick a ten year old (the boss’s kid) pranked on everyone in the office any chance he could: In 1985, this little hacker would silently stand by your terminal as you return from lunch and just after you enter your response to login:, he would reach over and depress the return key directly after you depressed the return key in response to login. Well, that double return key action would enter a return in response to password. The login command would promptly comply and print login again. As you keep your momentum of logging in, not really realizing you were just victimized, you press on and enter your password followed by a return key. At that point, echo is turned on and surprisingly (to the uninitiated anyway) your password appears on the tube for all to see. And, yes, login complies with its security unconscious codebase (assuming you pressed return after your password was displayed) and this further memorializes the nefarious transaction by writing the offense into the file used by lastb. For example, a relatively recent version of a well known UNIX still has the lastb command. It can be disabled. Look, if you really want the lastb feature, why not encrypt it. Or, only allow lastb entries where a particular login exists on the system. (Although, this too is an awful practice as you clue the hacker into valid logins. Our Founding Fathers taught us that just counting the milliseconds (now even more granularity than a millisecond is required) of response time can alert a savvy hacker as to the validity of a login. Just knowing that a login is valid wins the hacker almost half the battle. Here, my complaint is that the nearly clear text password is likely placed in the files used by lastb over time. I realize this functionality can easily be turned off. But, human nature reminds us that simply having dangerous functionality available is a problem. 
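If the feature is going to exist at all, the least a system can do is keep the failed-login log unreadable by ordinary users. A small audit sketch (Python, for illustration only; /var/log/btmp is the common Linux location, and the path varies by system):

```python
# Sketch: check that a failed-login log (btmp and friends) is not readable by
# group or other. This is an audit aid, not a fix for lastb's design.
import os, stat

def world_readable(path):
    """True if the file at path is readable by group or other."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

def audit_btmp(path="/var/log/btmp"):
    """Report on the btmp file at path, if any; path is system-dependent."""
    if not os.path.exists(path):
        return "no btmp here"
    return "LEAKY" if world_readable(path) else "ok"
```

Mode 0600 owned by root is the usual convention for btmp on systems that get this right; anything looser leaks those fat-fingered passwords to every local user.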
Can anyone on our Forum elucidate the impetus leading to the rationale for lastb, and why its warts have been around for so long?

Bill Corcoran

From gtaylor at tnetconsulting.net Mon Jul 9 13:25:27 2018
From: gtaylor at tnetconsulting.net (Grant Taylor)
Date: Sun, 8 Jul 2018 21:25:27 -0600
Subject: [COFF] Other OSes?
In-Reply-To: <20180709015650.GA29373@thunk.org>
References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org>
Message-ID: <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net>

On 07/08/2018 07:56 PM, Theodore Y. Ts'o wrote:
> I never used Unix on teletypes; when I was using an ASR-35 and a KSR-33
> teletype, it was connected to a PDP-8/i and PDP-15/30, although both
> did have a line editor that was very similar to /bin/ed. (This is why
> to this day if I'm on a slow link or am running in a reduced rescue
> environment, I fall back to /bin/ed, not /bin/vi --- my finger macros
> are more efficient using /bin/ed than /bin/vi.)

Please forgive my assumption and ignorance. What OS ran on the PDP-8/i or PDP-15/30?

I fully get falling back to old habits that work well, especially in a constrained environment.

> At least for me, the huge difference that made a difference to how I
> would use a computer primarily had to do with speed that could be sent
> from a computer. So even when using a glass tty, if there was 300 or
> 1200 bps modem between me and the computer, I would be much more likely
> to use editor scripts --- and certainly, I'd be much more likely to use
> a line editor than anything curses-oriented, whether it's vim or emacs.

Doing different things based on the (lack of) speed of the connection makes complete sense.

> I'd also be much more thoughtful about figuring out how to carefully
> do a global search and replace in a way that wouldn't accidentally
> make the wrong change.
Forcing myself to think for a minute or two > about how to do clever global search and replaces was well worth it when > there was a super-thin pipe between me and the computer. These days, > I'll just use emacs's query-replace, which will allow me to approve each > change in context, either for each change, or once I'm confident that I > got the simple-search-and-replace, or regexp-search-and-replace right, > have it do the rest of the changes w/o approval. In light of the (lack of) speed aspect above, that seems perfectly reasonable. I too do something similar in vi(m), confirming some changes until I gain trust that they are doing the proper thing. > It's not what you *can't* do with a glass-tty. It's just that with a > glass-tty, I'm much more likely to rely on incremental searches of my > bash command-line history to execute previous commands, possibly with > some changes, because it's more convenient than firing up an editor and > creating a shell script. ACK > But there have been times, even recently, when I've been stuck behind a > slow link (say, because of a crappy hotel network), where I'll find myself > reverting, at least partially, to my old teletype / 1200 modem habits. Fair. Will you please elaborate on what you mean by "editor scripts"? That's a term that I'm not familiar with. — I didn't see an answer to this question, so I'm asking again. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From gtaylor at tnetconsulting.net Mon Jul 9 13:30:20 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 8 Jul 2018 21:30:20 -0600 Subject: [COFF] Today I learned something new about File Transfer Protocol.
In-Reply-To: <20180708162833.2de9eca0@jabberwock.cb.piermont.com> References: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> <20180708162833.2de9eca0@jabberwock.cb.piermont.com> Message-ID: <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> On 07/08/2018 02:28 PM, Perry E. Metzger wrote: > In the old old days, FTP had all sorts of other functions, including > mail transfer. (SMTP obsoleted that by 1980 or so.) Do you have any pointers handy (read: in cache) to where I can read more about FTP's involvement in early email? If not, I'll do some searches and see what I can come up with. If nothing else, I suspect that early RFCs 821 (?) will make references to what it's supplanting. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From crossd at gmail.com Mon Jul 9 13:35:43 2018 From: crossd at gmail.com (Dan Cross) Date: Sun, 8 Jul 2018 23:35:43 -0400 Subject: [COFF] Other OSes? In-Reply-To: <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: On Sun, Jul 8, 2018 at 11:24 PM Grant Taylor via COFF wrote: > On 07/08/2018 07:56 PM, Theodore Y. Ts'o wrote: > > [snip] > At least for me, the huge difference that made a difference to how I > > would use a computer primarily had to do with speed that could be sent > > from a computer. 
So even when using a glass tty, if there was 300 or > > 1200 bps modem between me and the computer, I would be much more likely > > to use editor scripts --- and certainly, I'd be much more likely to use > > a line > [snip] > > Will you please elaborate on what you mean by "editor scripts"? That's > a term that I'm not familiar with. — I didn't see an answer to this > question, so I'm asking again. > Back in the days of line editors, which read their commands from the standard input and were relatively simple programs as far as their user interface was concerned, you could put a set of editor commands into a file and run it sort of like a shell script. This way, you could run the same sequence of commands against (potentially) many files. Think something like: $ cat >scr.ed g/unix/s/unix/Unix/g w q ^D $ for f in *.ms; do ed $f << scr.ed; done; unset f ... Back in the days of teletypes, line editors were of course the only things we had. When we moved to glass TTYs with cursor addressing we got richer user interfaces, but with those came more complex input handling (often reading directly from the terminal in "raw" mode), which meant that scripting the editor was harder, as you usually couldn't just redirect a file into its stdin. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From imp at bsdimp.com Mon Jul 9 13:36:52 2018 From: imp at bsdimp.com (Warner Losh) Date: Sun, 8 Jul 2018 21:36:52 -0600 Subject: [COFF] Today I learned something new about File Transfer Protocol. In-Reply-To: <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> References: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> <20180708162833.2de9eca0@jabberwock.cb.piermont.com> <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> Message-ID: On Sun, Jul 8, 2018, 9:29 PM Grant Taylor via COFF wrote: > On 07/08/2018 02:28 PM, Perry E. 
Metzger wrote: > > In the old old days, FTP had all sorts of other functions, including > > mail transfer. (SMTP obsoleted that by 1980 or so.) > > Do you have any pointers handy (read: in cache) to where I can read more > about FTP's involvement in early email? > > If not, I'll do some searches and see what I can come up with. If > nothing else, I suspect that early RFCs 821 (?) will make references to > what it's supplanting. > They are clearly documented in the early FTP RFCs. 753 sticks in my head. But the index will help you find the right ones... Warner. > -- > Grant. . . . > unix || die > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff -------------- next part -------------- An HTML attachment was scrubbed... URL: From gtaylor at tnetconsulting.net Mon Jul 9 13:43:10 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 8 Jul 2018 21:43:10 -0600 Subject: [COFF] Other OSes? In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: On 07/08/2018 09:35 PM, Dan Cross wrote: > Back in the days of line editors, which read their commands from the > standard input and were relatively simple programs as far as their user > interface was concerned, you could put a set of editor commands into a > file and run it sort of like a shell script. This way, you could run the > same sequence of commands against (potentially) many files. Think > something like: ACK I figured that you were referring to something like that. But I wanted to ask in case there was something else that I didn't know about but could benefit from knowing. I.e. vimscript. > $ cat >scr.ed > g/unix/s/unix/Unix/g > w > q > ^D > $ for f in *.ms; do ed $f << scr.ed; done; unset f > ... 
Nice global command. Run the substitution (globally on the line) on any line containing "unix". I like it. ;-) The double << is different than what I would expect. I wonder if that's specific to the shell or appending to the input after the file? > Back in the days of teletypes, line editors were of course the only > things we had. When we moved to glass TTYs with cursor addressing we got > richer user interfaces, but with those came more complex input handling > (often reading directly from the terminal in "raw" mode), which meant > that scripting the editor was harder, as you usually couldn't just > redirect a file into its stdin. That makes sense. Thank you for the explanation. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From imp at bsdimp.com Mon Jul 9 13:52:09 2018 From: imp at bsdimp.com (Warner Losh) Date: Sun, 8 Jul 2018 21:52:09 -0600 Subject: [COFF] Other OSes? In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: I've often thought it a mistake to have main() be the only entry point. There's a simple elegance in that, I'll grant, but if there were a parse_args() entry point, called before main and supporting completion, so that you'd only proceed to main on a complete command line, you could easily put the burden of command line completion into the programs. They know that arg3 is a filename or a sub command or a network interface or an alternate name of cthulhu. tcsh does command completion, but in a half-assed way: the shell has to know about all the commands and keep a giant table that duplicates the parsers of every program on the system, which isn't scalable.
It's had that since the '80s, as has bash, and both have been equally lame for just as long. It would also let the program do 'noise words' like TOPS-20 did w/o having to actually parse them... clang --complete is an interesting variation on my ideas within the realm of doing non-standard weird things and starts to place the burden of knowledge on the program itself, which is more in line with the thinking of Unix and the main stream of OOish thought we've known about since the early 70s with smalltalk and other such pioneering things. Then again, maybe my idea is too much influenced by TOPS-20 commands that became resident to do command completion, then ran more quickly because they were already at least half-loaded when the user hit return. Warner On Sun, Jul 8, 2018 at 9:43 PM, Grant Taylor via COFF wrote: > On 07/08/2018 09:35 PM, Dan Cross wrote: > >> Back in the days of line editors, which read their commands from the >> standard input and were relatively simple programs as far as their user >> interface was concerned, you could put a set of editor commands into a file >> and run it sort of like a shell script. This way, you could run the same >> sequence of commands against (potentially) many files. Think something like: >> > > ACK > > I figured that you were referring to something like that. But I wanted to > ask in case there was something else that I didn't know about but could > benefit from knowing. I.e. vimscript. > > $ cat >scr.ed >> g/unix/s/unix/Unix/g >> w >> q >> ^D >> $ for f in *.ms; do ed $f << scr.ed; done; unset f >> ... >> > > Nice global command. Run the substitution (globally on the line) on any > line containing "unix". I like it. ;-) > > The double << is different than what I would expect. I wonder if that's > specific to the shell or appending to the input after the file? > > Back in the days of teletypes, line editors were of course the only things >> we had.
When we moved to glass TTYs with cursor addressing we got richer >> user interfaces, but with those came more complex input handling (often >> reading directly from the terminal in "raw" mode), which meant that >> scripting the editor was harder, as you usually couldn't just redirect a >> file into its stdin. >> > > That makes sense. Thank you for the explanation. > > > > -- > Grant. . . . > unix || die > > > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gtaylor at tnetconsulting.net Mon Jul 9 14:02:22 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 8 Jul 2018 22:02:22 -0600 Subject: [COFF] Today I learned something new about File Transfer Protocol. In-Reply-To: References: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> <20180708162833.2de9eca0@jabberwock.cb.piermont.com> <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> Message-ID: On 07/08/2018 09:36 PM, Warner Losh wrote: > They are clearly documented in the early FTP RFCs. 753 sticks in my > head. But the index will help you find the right ones... I found things in 765. I forgot how integrated FTP and SMTP used to be with the system. What with the ability to send to the users terminal and / or their mail file. I don't think we'd ever see such integration like that today. Though to be fair, we typically don't see multiple people interactively logged into the same system. Save for maybe mainframes or some mid-range systems (AS/400). -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From tytso at mit.edu Mon Jul 9 15:23:33 2018 From: tytso at mit.edu (Theodore Y. 
Ts'o) Date: Mon, 9 Jul 2018 01:23:33 -0400 Subject: [COFF] Other OSes? In-Reply-To: <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: <20180709052333.GB29373@thunk.org> On Sun, Jul 08, 2018 at 09:25:27PM -0600, Grant Taylor via COFF wrote: > > Please forgive my assumption and ignorance. What OS ran on the PDP-8/i or > PDP-15/30? There were multiple OS's for the PDP-8 and PDP-15. What I used on the PDP-8/i was the 4k disk monitoring system, so named because it only required 4k of 12-bit wide core memory. The resident portion of the OS only required 128 12-bit words, loaded at octal 7600, at the top of the 4k memory. It could be bootstrapped by toggling in 4 (12-bit wide) instructions into the front console which had about 24 binary switches[1]. [1] https://www.youtube.com/watch?v=yUZrn7qTGcs The OS was distributed on paper tape, which was loaded by toggling in 18 instructions of the RIM loader, which was then used to load the BIN loader from paper tape into core memory. The RIM loader was designed to be simple and easy to toggle into the console. (ROM? EPROM? We don't need no stink'in firmware in read-only memories!) The BIN loader could read in more efficiently packed data stored on punched paper tape. The BIN loader would then be used to load the disk builder program which would install the OS into the DF-32 (which stored 32k 12-bit words on a 12" platter). Later PDP-8's would run a more sophisticated OS, such as OS/8, which had a "Concise Command Language" (CCL) that was designed to be similar to the TOPS-10 system running on the PDP-10. OS/8 was a single-user system, though; no time-sharing! The PDP-15/30 that I used had a paper tape reader and four DECtape units.
It ran a background-foreground monitor. The background system was what was used for normal program development. The foreground job had unconditional priority over the background job and was used for jobs such as real-time data acquisition. When the background/foreground OS was started, initially only the foreground teletype was active. If you didn't have any foreground job to execute, you'd start the "idle" program, which once started, would then cause the background teletype to come alive and print a command prompt. So it was a tad bit more sophisticated than the 4k disk monitor system. > Will you please elaborate on what you mean by "editor scripts"? That's a > term that I'm not familiar with. — I didn't see an answer to this > question, so I'm asking again. There have been times when I'll do something like this in a shell script: #!/bin/sh for i in file1 file2 file3 ; do ed $i << EOF /^$/ n s/^Obama/Trump/ w q EOF done This is a toy example, but hopefully it gets the point across. There are times when you don't want to use a stream editor, but instead want to send a series of editor commands to an editor like /bin/ed. I suspect that younger folks would probably use something else, perhaps perl, instead. - Ted From perry at piermont.com Mon Jul 9 21:24:05 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 9 Jul 2018 07:24:05 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180709015650.GA29373@thunk.org> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> Message-ID: <20180709072405.1caef2b0@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 21:56:50 -0400 "Theodore Y. 
Ts'o" wrote: > These days, I'll just use emacs's query-replace, which will allow > me to approve each change in context, either for each change, or > once I'm confident that I got the simple-search-and-replace, or > regexp-search-and-replace right, have it do the rest of the changes > w/o approval. I often use a utility called "qsubst" that allows emacs-like query replace at the command line. I got it off the net around 1990 and haven't seen it widely distributed, but it's Damn Useful. On the more general topic: I, too, never used Unix on a printing terminal (by the time I got to it in the early 1980s everything was CRTs) and I've used shell scripts pretty consistently over the past few decades. I tend not to write really long ones any more -- the advent of Perl and then languages like Ruby and Python sort of ended that -- but I write short ones a lot, and I write five-line ones at the bash prompt several times a day. (One reason why emacs/vi-like command-line editing is so useful to me is it lets me quickly hack up a script at the terminal prompt.) And yes, if it's got a couple of nested loops and a long pipeline or two, I think it's still a script even if I type it ad hoc. > It's not what you *can't* do with a glass-tty. It's just that with > a glass-tty, I'm much more likely to rely on incremental searches > of my bash command-line history to execute previous commands, > possibly with some changes, because it's more convenient than > firing up an editor and creating a shell script. Indeed. That's my work style as well. Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 21:27:32 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 9 Jul 2018 07:27:32 -0400 Subject: [COFF] Today I learned something new about File Transfer Protocol.
In-Reply-To: <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> References: <4d26a222-b3d8-eb17-3d61-4d2c1d69fefb@spamtrap.tnetconsulting.net> <20180708162833.2de9eca0@jabberwock.cb.piermont.com> <911994f5-9433-6967-8701-9b27c34d2471@spamtrap.tnetconsulting.net> Message-ID: <20180709072732.069437b9@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 21:30:20 -0600 Grant Taylor via COFF wrote: > On 07/08/2018 02:28 PM, Perry E. Metzger wrote: > > In the old old days, FTP had all sorts of other functions, > > including mail transfer. (SMTP obsoleted that by 1980 or so.) > > Do you have any pointers handy (read: in cache) to where I can read > more about FTP's involvement in early email? > > If not, I'll do some searches and see what I can come up with. If > nothing else, I suspect that early RFCs 821 (?) will make > references to what it's supplanting. RFC 114 or 765 might cover it. Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 21:32:41 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 9 Jul 2018 07:32:41 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: <20180709073241.6babe8f4@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 21:52:09 -0600 Warner Losh wrote: > It would also let the program do 'noise > words' like TOPS-20 did w/o having to actually parse them... Noise words are a thing Unix is missing, but given the lack of CMND JSYS style completion, the reason for the lack is obvious -- nothing generates noisewords so nothing needs to ignore them. This is yet another cool thing clang's --complete hack could make widely available, though then we'd need a standard for noisewords. 
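The delegation model being described fits in a few lines of shell. A sketch follows, with a stand-in `mytool` function playing the role of a program that answers completion queries; the tool, its subcommands, and the `--autocomplete` spelling are all invented here (clang's actual flag differs), so this only illustrates the shape of the scheme:

```shell
# Stand-in "program": given --autocomplete PREFIX, it prints one candidate
# per line. A real tool would generate these from its own argument parser,
# which is the whole point: the knowledge lives in the program.
mytool() {
    if [ "$1" = "--autocomplete" ]; then
        printf '%s\n' build burn backup | grep "^$2"
        return 0
    fi
    echo "real work: $*"
}

# Shell side: ask the program itself for candidates instead of keeping a
# giant tcsh-style table of every command's syntax in the shell.
_mytool_complete() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(mytool --autocomplete "$cur" 2>/dev/null) )
}
complete -F _mytool_complete mytool 2>/dev/null || true  # bash-only; no-op elsewhere

mytool --autocomplete bu   # prints: build and burn, one per line
```

With the real mechanism, bash's programmable completion calls the registered function on TAB; the last line above just shows the program's half of the conversation.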
> clang --complete is an interesting variation on my ideas within the > realm of doing non-standard weird things and starts to place the > burden of knowledge on the program itself, which is more in line > with the thinking of Unix and the main stream of OOish thought > we've know about since the early 70s with smalltalk and other such > pioneering things. Precisely. The clang hack is exactly what one would want if it could be made popular. Perry -- Perry E. Metzger perry at piermont.com From crossd at gmail.com Mon Jul 9 21:34:28 2018 From: crossd at gmail.com (Dan Cross) Date: Mon, 9 Jul 2018 07:34:28 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> Message-ID: On Sun, Jul 8, 2018 at 11:42 PM Grant Taylor via COFF wrote: > [snip] > The double << is different than what I would expect. I wonder if that's > specific to the shell or appending to the input after the file? > Actually, that was just a typo; it should have been a single '<'. The '<<' (double less-than) is how you would introduce a 'here' document, as Ted did in his example. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From perry at piermont.com Mon Jul 9 21:50:35 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 9 Jul 2018 07:50:35 -0400 Subject: [COFF] Other OSes? 
In-Reply-To: <20180709073241.6babe8f4@jabberwock.cb.piermont.com> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> <20180709073241.6babe8f4@jabberwock.cb.piermont.com> Message-ID: <20180709075035.59bf47d0@jabberwock.cb.piermont.com> On Mon, 9 Jul 2018 07:32:41 -0400 "Perry E. Metzger" wrote: > On Sun, 8 Jul 2018 21:52:09 -0600 Warner Losh > wrote: > > It would also let the program do 'noise > > words' like TOPS-20 did w/o having to actually parse them... > > Noise words are a thing Unix is missing, but given the lack of > CMND JSYS style completion, the reason for the lack is obvious -- > nothing generates noisewords so nothing needs to ignore them. This > is yet another cool thing clang's --complete hack could make widely > available, though then we'd need a standard for noisewords. Actually, it occurs to me that noisewords aren't actually needed. The printed help during completion can handle conveying the information that noisewords provided. Perry > > clang --complete is an interesting variation on my ideas within > > the realm of doing non-standard weird things and starts to place > > the burden of knowledge on the program itself, which is more in > > line with the thinking of Unix and the main stream of OOish > > thought we've know about since the early 70s with smalltalk and > > other such pioneering things. > > Precisely. The clang hack is exactly what one would want if it could > be made popular. > > Perry -- Perry E. Metzger perry at piermont.com From clemc at ccc.com Mon Jul 9 22:52:00 2018 From: clemc at ccc.com (Clem Cole) Date: Mon, 9 Jul 2018 08:52:00 -0400 Subject: [COFF] Other OSes? 
In-Reply-To: <20180709052333.GB29373@thunk.org> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> <20180709052333.GB29373@thunk.org> Message-ID: On Mon, Jul 9, 2018 at 1:23 AM Theodore Y. Ts'o wrote: > > There were multiple OS's for the PDP-8 and PDP-15. Right. You can find them on the PDP-8 web sites for instance. Check out SIMH or, better yet, the PiDP-8, which I have running with 4 OSes loaded into the ROMs. The RIM loader can be loaded from the toggle switches for the purist, but the Pi will do it for you on power up if you like and don’t want the tedium. > What I used on the > PDP-8/i was the 4k disk monitoring system, so named because it only > required 4k of 12-bit wide core memory. Right, I believe that was the first widely available OS and described in the small computer handbook. Remember mini (as in minicomputer) was a term Gordon coined to mean minimal computer. > The resident portion of the > OS only required 128 12-bit words, loaded at octal 7600, at the top of > the 4k memory. It could be bootstrapped by toggling in 4 (12-bit > wide) instructions into the front console which had about 24 binary > switches[1]. > > [1] https://www.youtube.com/watch?v=yUZrn7qTGcs > > The OS was distributed on paper tape, which was loaded by toggling in > 18 instructions of the RIM loader, which was then used to load the BIN > loader from paper tape into core memory. The RIM loader was designed > to be simple and easy to toggle into the console. (ROM? EPROM? We > don't need no stink'in firmware in read-only memories!) The BIN > loader could read in more efficiently packed data stored on punched > paper tape. The BIN loader would then be used to load the disk > builder program which would install the OS into the DF-32 (which > stored 32k 12-bit words on a 12" platter).
Right, and the RIM loader was printed on the front panel of the console. BTW, the disk was 19” in diameter. Danny Klein has the original disk platter from one of the original PDP-8s - before marriage it used to hang in his living room. FYI, that was the system the computer museum got from the EE Dept after it died in approx ‘75; I was there when the disk crashed. > > Later PDP-8's would run a more sophisticated OS, such as OS/8, which > had a "Concise Command Language" (CCL) that was designed to be similar > to the TOPS-10 system running on the PDP-10. OS/8 was a single-user > system, though; no time-sharing! Be careful, grasshopper. TSS/8 is available on those web sites, although I admit it does not run on my PiDP-8 and I have not figured out why (something is corrupt and I have not spent the time or energy to chase it). Anyway, TSS/8 supported 4-8 ASR-33 terminals, each with 4K words as you described before. There were an assembler, BASIC, FOCAL, FORTRAN-IV, and an Algol with circa-1965 extensions. > > > > The PDP-15/30 that I used had a paper tape reader and four DECtape > units. It ran a background-foreground monitor. The background system > was what was used for normal program development. The foreground job > had unconditional priority over the background job and was used for > jobs such as real-time data acquisition. When the > background/foreground OS was started, initially only the foreground > teletype was active. If you didn't have any foreground job to > execute, you'd start the "idle" program, which once started, would > then cause the background teletype to come alive and print a command > prompt. So it was a tad bit more sophisticated than the 4k disk > monitor system. Right, this is really the model for RT11, which would beget CP/M and lastly DOS-86 (aka PC-DOS, later renamed MS-DOS). -- Sent from a handheld; expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed...
URL: From clemc at ccc.com Mon Jul 9 23:06:10 2018 From: clemc at ccc.com (Clem Cole) Date: Mon, 9 Jul 2018 09:06:10 -0400 Subject: [COFF] PiDP Obsolesces Guaranteed In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> <20180709052333.GB29373@thunk.org> Message-ID: BTW. Just last night before I went to bed, I booted for the first time my PiDP-11/70. I have one of the first alpha units since I’ve been coaching Oscar with the project a small amount. I left it running a some tests last night and am off to USENIX ATC today, so I have not even tried to boot Unix on it yet. One of the cool things about this project, I was able to arrange for the designer of the original 11/70 front panel to be consulted (Jim Bleck) on the molds to Oscar who was able to have the molds and the switches made in Europe. Anyway, the PiDP-8 is cool and looks the same as the original 8/I, but the panel is framed/finished in wood. For the 11, the panel is the real thing - DEC white (hopefully will not fade using modern plastics). FYI Oscar is considering doing the -10 next but given the amount of work from the 8 and the 11 it’s more than a hobby. -- Sent from a handheld expect more typos than usual -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at kdbarto.org Mon Jul 9 23:10:03 2018 From: david at kdbarto.org (David) Date: Mon, 9 Jul 2018 06:10:03 -0700 Subject: [COFF] Other OSes? 
In-Reply-To: <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> Message-ID: <746DC51F-65FB-4656-AED2-A0E244C6F453@kdbarto.org> > When we did cisco CLI support in our router product, syntax for all > the commands was factored out. So for example > > show.ip > help IP information > > show.ip.route > help IP routing table > exec cmd_showroutes() > > show.ip.route.ADDR > help IP address > verify verify_ipaddr() > exec cmd_showroutes() > > etc. This was a closed universe of commands so easy to extend. > > Perhaps something similar can be done, where in a command src dir > you also store allowed syntax. This can be compiled to a syntax > tree and attached to the binary using some convention. The cmd > binary need not deal with this (except perhaps when help or verify > required cmd specific support). > This sounds like a resource fork on a file. Much like MacOS (pre X) had for all files. It was a convenient way to keep data that defined the application associated with the application. You could also edit it to customize it for your own use if you wished. There were a great many good things about the MacOS of old. David > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff From perry at piermont.com Mon Jul 9 23:13:21 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 9 Jul 2018 09:13:21 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> Message-ID: <20180709091321.6705d8e9@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 20:19:18 -0400 Dan Cross wrote: > On Sun, Jul 8, 2018 at 8:12 PM Perry E. 
Metzger > wrote: > > > On Sun, 8 Jul 2018 16:27:54 -0700 Bakul Shah > > wrote: > > [snip] > > > This sort of magic incantation is needed because no one has > > > bothered to create a simple library for autocompletion & no > > > standard convention has sprung up that a program can use. > > > > Yes, I know. That's exactly what I'm explaining. Read the URL > > above. It describes a quite general mechanism for a program to > > convey to the shell, without needing any special binary support, > > how autocompletion should work. > > > I read that article and it wasn't clear to me that the > `--autocomplete` argument sent anything back to the shell. The shell autocompletion script parses the output. That's the point of the facility. It lets you get a TOPS-20 style completion model where you get completion and help on all options, but the shell doesn't need to be given that information ad hoc, it gets it from the program through the use of the --autocomplete flag. > I > suppose you could use it with bash/zsh style completion script to > get something analogous to context-sensitive help, That's the sole purpose of it. > but it relies on > exec'ing the command (clang or whatever) and getting output from it > that someone or something can parse and present back to the user in > the form of a partial command line; That's what it does, yes. > This is rather unlike how TOPS-20 did it, wherein an image was > run-up in the user's process (effectively, the program was exec'ed, > though under TOPS-20 exec didn't overwrite the image of the > currently running program like the shell) and asked about > completions. In this case, the program is run and asked about completions. It's a somewhat different mechanism, but Unix is a different operating system, and in the modern world, the time involved is ignorable. Perry -- Perry E. Metzger perry at piermont.com From perry at piermont.com Mon Jul 9 23:17:44 2018 From: perry at piermont.com (Perry E. 
Metzger) Date: Mon, 9 Jul 2018 09:17:44 -0400 Subject: [COFF] Other OSes? In-Reply-To: <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708165006.21a7429e@jabberwock.cb.piermont.com> <20180708201151.03aa46c0@jabberwock.cb.piermont.com> <9C710876-D8BC-4CC6-B1A2-2B1F7C066033@bitblocks.com> Message-ID: <20180709091744.3ff925ad@jabberwock.cb.piermont.com> On Sun, 8 Jul 2018 19:00:31 -0700 Bakul Shah wrote: > It wasn't clear to me either. I looked at the clang webpage again > and all I see is autocompletion support for clang itself. It is only autocompletion help for clang, but the point is that you could implement a similar flag in any program. If people did this consistently, across the board, it would fix the problem, cleanly and efficiently. You just add a --autocomplete flag to most programs so that they can cooperate with the shell's autocompletion and then everything is as friendly as TOPS-20 at last. Perry -- Perry E. Metzger perry at piermont.com From tytso at mit.edu Tue Jul 10 00:39:37 2018 From: tytso at mit.edu (Theodore Y. Ts'o) Date: Mon, 9 Jul 2018 10:39:37 -0400 Subject: [COFF] Other OSes? In-Reply-To: References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> <20180709052333.GB29373@thunk.org> Message-ID: <20180709143937.GC29373@thunk.org> On Mon, Jul 09, 2018 at 08:52:00AM -0400, Clem Cole wrote: > > Right and the RIM loader was printed on the front panel of the console. btw > the disk was 19” in diameter. Danny Klein has the original disk platter > from one of the original PDP-8s - before marriage it used to hang in his > living room. FYI that was the system at the computer museum from the EE > Dept after it died in approx ‘75; I was there when the disk crashed.
I don't think it was 19" --- the DF32 was mounted in a 19" rack, yes. But the platter was in an enclosure which was distinctly smaller than the overall width of the DF32, and the platter was smaller still. See the pictures here[1] and here[2]. [1] https://www.pdp8.net/dfds32/dfds32.shtml [2] https://www.pdp8.net/dfds32/pics/df32diskorig.shtml?small I once physically held the DF32 platter in my hands. My dad and I pulled it out of the enclosure, wiped it down with alcohol, looked at both sides of the platter to see which was less scratched up, and put the "better" side face down on top of the fixed heads, and then screwed the platter back into place. And it worked, afterwards, too! You can't do that with today's HDD's! :-) > > Later PDP-8's would run a more sophisticated OS, such as OS/8, which > > had a "Concise Command Language" (CCL) that was designed to be similar > > to the TOPS-10 system running on the PDP-10. OS/8 was a single-user > > system, though; no time-sharing! > > Be careful grasshopper. TSS/8 is available on those web sites although I > admit it does not run on my PiDP-8 and I have not figured out why (Something is > corrupt and I have not spent the time or energy to chase it). Anyway, > TSS/8. Supported 4-8 ASR-33 terminals, each had 4K words as you described > before. There was assembler, basic, focal, Fortran-IV and an Algol circa > 1965 extensions. I had forgotten about TSS/8. OS/8 was indeed only single-user, although apparently there was a multi-user BASIC interpreter which was available as an option. Our PDP-8/i only had 8k of core memory so while in theory it was possible (barely) to run OS/8, we never did. My knowledge of OS/8 was only from the manuals. (And indeed, how I first learned binary arithmetic, and programming in general, was via Digital's "Introduction to Programming" book[1] which I inhaled when I was maybe seven or eight.)
[1] https://web.archive.org/web/20051220132023/http://www.bitsavers.org:80/pdf/dec/pdp8/IntroToProgramming1969.pdf TSS/8 was so far beyond the capabilities of the PDP-8 in my father's lab that I never spent much time learning about it. I think some of the DECUS books we had referenced it, so I knew of its existence, but not much more than that. > RIght this is really the model for RT11 which would beget CP/M and last > DOS-86 (aka PC-DOS, later renamed MS-DOS). We had a PDP-11 at my computer lab in high school. It was actually running TSX-11 (the time-sharing extension of RT-11), so it could support a dozen or so virtual instances of RT-11, where we learned PDP-11 assembler in the advanced comp-sci class. I remember two fun things about the TSX-11; the first was that when you logged out of TSX-11, it would print the time used on the console, and that was being printed by the underlying RT-11 --- so if you typed control-S right at that point, it would lock up the whole system, and all of the other users would be dead in the water. The other fun thing was that if you could get physical access to the PDP-11, and brought the secondary disk off-line, and then forced a reboot, RT-11 wouldn't be able to bring the TSX-11 system up fully, and so the LA36 console would drop to an RT-11 command line prompt without asking for a password. This would allow you to run the account editing program to set up a new privileged TSX-11 account. (Basically, the equivalent of editing /etc/passwd and adding another account with uid == 0.) My knowledge of these facts was, of course, purely hypothetical. :-) In any case, that was why the first Linux FTP site in North America was named "tsx-11.mit.edu"; it was a Vax VS3800 running Ultrix that was sitting in my office when I was working at MIT as a full-time staff member.
At first, tsx-11 was my personal workstation, but over time it became a dedicated full-time server and migrated to larger and more powerful machines; first a Dec Alpha running OSF/1, and later on, a more powerful Intel server running Linux. Shortly afterwards, I started working at VA Linux Systems, and while tsx-11 was operating for a while after that, after a while we shut it down since the hardware was getting old and I no longer had access to the machine room in MIT Building E40 where it lived. - Ted From clemc at ccc.com Tue Jul 10 00:46:44 2018 From: clemc at ccc.com (Clement T. Cole) Date: Mon, 9 Jul 2018 10:46:44 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180709143937.GC29373@thunk.org> References: <82df833ae2a587b386b4154fc6051356a3510b19@webmail.yaccman.com> <129a13eb-de93-3d6b-b7b5-d0df13e60c87@spamtrap.tnetconsulting.net> <20180709015650.GA29373@thunk.org> <3bcafd7f-26be-8770-c754-b179e9cff4a5@spamtrap.tnetconsulting.net> <20180709052333.GB29373@thunk.org> <20180709143937.GC29373@thunk.org> Message-ID: <0D6EEA19-AD29-4B6F-967E-A015C5D4D081@ccc.com> You're right, it was called a 19” technology and the physical platter was closer to 17 or 18 - we should ask Dan to measure it Sent from my iPad > On Jul 9, 2018, at 10:39 AM, Theodore Y. Ts'o wrote: > >> On Mon, Jul 09, 2018 at 08:52:00AM -0400, Clem Cole wrote: >> >> Right and the RIM loader was printed on the front panel of the console. btw >> the disk was 19” in diameter. Danny Klein has the original disk platter >> from one of the original PDP-8s - before marriage it used to hang in his >> living room. FYI that was the system at the computer museum from the EE >> Dept after it died in approx ‘75; I was there when the disk crashed. > > I don't think it was 19" --- the DF32 was mounted in a 19" rack, yes. > But the platter was in an enclosure which was distinctly smaller than > the overall width of the DF32, and the platter was smaller still. See > the pictures here[1] and here[2].
> > [1] https://www.pdp8.net/dfds32/dfds32.shtml > [2] https://www.pdp8.net/dfds32/pics/df32diskorig.shtml?small > > I once physically held the DF32 platter in my hands. My dad and I > pulled it out of the enclosure, wiped it down with alcohol, looked at > both sides of the platter to see which was less scratched up, and put > the "better" side face down on top of the fixed heads, and then > screwed the platter back into place. And it worked, afterwards, too! > You can't do that with today's HDD's! :-) > >>> Later PDP-8's would run more a sophisticated OS, such as OS/8, which >>> had a "Concise Command Language" (CCL) that was designed to be similar >>> to the TOPS-10 system running on the PDP-10. OS/8 was a single-user >>> system, though; no time-sharing! >> >> Be careful grasshopper. TSS/8 is available on those web sites although I >> admit it does not run on my PiDP-8 and I have not figured why (Something is >> corrupt and I have not spent the time or energy to chase it). Anyway, >> TSS/8. Supported 4-8 ASR-33 terminals, each had 4K words as you described >> before. There was assembler, basic, focal, Fortran-IV and an Algol circa >> 1965 extensions. > > I had forgotten about TSS/8. OS/8 was indeed only single-user, > although apparently there was a multi-user BASIC interpreter which was > available as an option. Our PDP-8/i only had 8k of core memory so > while in theory it was possible (barely) to run OS/8, we never did. > My knowledge of OS/8 was only from the manuals. (And indeed, how I > first learned binary arithmatic, and programming in general, was via > thet Digital's "Introduction to Programming" book[1] which I inhaled > when I was maybe seven or eight.) > > [1] https://web.archive.org/web/20051220132023/http://www.bitsavers.org:80/pdf/dec/pdp8/IntroToProgramming1969.pdf > > TSS/8 was so far beyond the capabilities of the PDP-8 in my father's > lab that I never spent much time learning about it. 
I think some of > the DECUS books we had referenced it, so I knew of its existence, but > not much more than that. > >> RIght this is really the model for RT11 which would begat CP/M and last >> DOS-86 (aka PC-DOS, later renamed MS-DOS). > > We had a PDP-11 at my computer lab in high school. It was actually > running TSX-11 (the time-sharing extension of RT-11), so it could > support a dozen or so virtual instances of RT-11, where we learned > PDP-11 assembler in the advanced comp-sci class. > > I remember two fun things about the TSX-11; the first was that when > you logged out of TSX-11, it would print the time used on the console, > and that was being printed by the underlying RT-11 --- so if you typed > control-S right at that point, it would lock up the whole system, and > all of the other users would be dead in the water. > > The other fun thing was if you could get physical access to the > PDP-11, and brought the secondary disk off-line, and then forced a > reboot, RT-11 wouldn't be able to bring the TSX-11 system up fully, > and so the LA36 console would drop to a RT-11 command line prompt > without asking for a password. This would allow you to run the > account editing program to set up a new privileged TSX-11 account. > (Basically, the equivalent of editing /etc/passwd and adding another > account with uid == 0.) > > My knowledge of these facts was, of course, purely hypothetical. :-) > > In any case, that was why the first Linux FTP site in North America > was named "tsx-11.mit.edu"; it was a Vax VS3800 running Ultrix that > was sitting in my office when I was working at MIT as a full-time > staff member. At first, tsx-11 was my personal workstation, but over > time it became a dedicated full-time server and migrated to larger and > more powerful machines; first a Dec Alpha running OSF/1, and later on, > a more powerful Intel server running Linux. 
Shortly afterwards, I > started working at VA Linux Systems, and while tsx-11 was operating > for a while after that, after a while we shut it down since the > hardware was getting old and I no longer had access to the machine > room in MIT Building E40 where it lived. > > - Ted From bakul at bitblocks.com Tue Jul 10 15:30:32 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Mon, 09 Jul 2018 22:30:32 -0700 Subject: [COFF] Other OSes? In-Reply-To: Your message of "Sun, 08 Jul 2018 22:44:23 -0400." References: <20180705055650.GA2170@minnie.tuhs.org> <20180708163150.0c9e1870@jabberwock.cb.piermont.com> Message-ID: <20180710053039.BA35C156E400@mail.bitblocks.com> On Sun, 08 Jul 2018 22:44:23 -0400 Dan Cross wrote: Dan Cross writes: > > On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger wrote: > > > [snip] > > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah > > wrote: > > > - Capabilities (a number of OSes implemented them -- See Hank > > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/ > > > > A fascinating resurrection of capabilities in a kind of pleasant > > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for > > FreeBSD and Linux. I wish it was more widely known about and adopted. > > > > > - Namespaces (plan9) > > > > A good choice I think. > > > Interestingly, it was in talking with Ben Laurie about a potential security > model for Akaros that I realized the correspondence between Plan9-style > namespaces and capabilities. I tend to think they are orthogonal. Namespaces map names to objects -- they control what objects you can see. Capabilities control what you can do with such an object. Cap operations such as revoke, grant, attenuating rights (subsetting the allowed ops) don't translate to namespaces. A cap is much like a file descriptor but it doesn't have to be limited to just open files or directories. As an example, a client may invoke "read(fd, buffer, count)" or "write(fd, buffer, count)".
This call may be serviced by a remote file server. Here "buffer" would be a cap on a memory range within the client process -- we don't want the server to have any more access to the client's memory. When the fileserver is ready to read or write, it has to securely arrange data transfer to/from this memory range[1]. You can even think of a cap as a "promise" -- you can perform certain operations in future as opposed to right now. [Rather than read from memory and send the data along with the write() call, the server can use the buffer cap in future. The file server in turn can pass on the buffer cap to a storage server, thereby reducing latency and saving a copy or two] > Since Akaros had imported a lot of plan9 > namespace code, we concluded that we already had the building blocks for a > capability-style security model with minimal additional work. I > subsequently concluded that Capsicum wasn't strong enough to be complete. The point of my original comment was that if capabilities were integral to a system (as opposed to being an add-on like Capsicum) a much simpler Unix might've emerged. [1] In the old days we used copyin/copyout for this sort of data transfer between a user process address space and the kernel. This is analogous. From bakul at bitblocks.com Tue Jul 10 15:41:08 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Mon, 09 Jul 2018 22:41:08 -0700 Subject: [COFF] Other OSes? In-Reply-To: Your message of "Sun, 08 Jul 2018 22:51:00 -0400." References: <20180705055650.GA2170@minnie.tuhs.org> <20180706054302.72718156E400@mail.bitblocks.com> Message-ID: <20180710054115.6F4AF156E400@mail.bitblocks.com> On Sun, 08 Jul 2018 22:51:00 -0400 Dan Cross wrote: > > On Fri, Jul 6, 2018 at 1:43 AM Bakul Shah wrote: > > > [snip some very interesting and insightful comments] > > Mill ideas are very much worth exploring. It will be possible > > to build highly secure systems with it -- if it ever gets > > sufficiently funded and built!
IMHO layers of mapping as with > > virtualization/containerization are not really needed for > > better security or isolation. > > Sure, with emphasis on that "if it ever gets sufficiently funded and > built!" part. :-) It sounds cool, but what to do on extant hardware? > Similarly with CHERI: they change nearly everything (including the > hardware). There is that! Mill made me realize per-process virtual address space can be thrown out *without* compromising on security. This can be a win if you are building an N-core processor (for some large N). Extant processor architectures are not going to make efficient use of available gates for large N-core. And multicore efforts such as Tilera don't seem to do anything re security. This just seems like something worth experimenting with. > > 2. Is mmap() *really* the best we can do for mapping arbitrary resources > > > into an address space? > > > > I think this is fine. Even remote objects mmapping should > > work! > > > > Sure, but is it the *best* we can do? Subjectively, the interface is pretty > ugly, and we're forced into a multi-level store. Maybe that's OK; it sure > seems like we haven't come up with anything better. But I wonder whether > that's because we've found some local maxima in our pursuit of > functionality vs cost, or because we're so stuck in the model of > multi-level stores and mapping objects into address spaces that we can't > see beyond it. And it sure would be nice if the ergonomics of the > programming interface were better. I was using mmap as a generic term. See my previous message for an example -- read/write(fd, buffer, count). Here buffer is a cap that can be used to map remote data into local addr space. From gtaylor at tnetconsulting.net Sat Jul 14 07:13:42 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 13 Jul 2018 15:13:42 -0600 Subject: [COFF] GoTEK SFR1M44-U100...
Message-ID: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> Does anyone have any experience with the GoTEK SFR1M44-U100 floppy drive emulator that reads "images" from a USB flash drive? Good? Bad? Indifferent? Run for the hills? -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From crossd at gmail.com Tue Jul 17 00:49:52 2018 From: crossd at gmail.com (Dan Cross) Date: Mon, 16 Jul 2018 10:49:52 -0400 Subject: [COFF] Other OSes? In-Reply-To: <20180710053039.BA35C156E400@mail.bitblocks.com> References: <20180705055650.GA2170@minnie.tuhs.org> <20180708163150.0c9e1870@jabberwock.cb.piermont.com> <20180710053039.BA35C156E400@mail.bitblocks.com> Message-ID: On Tue, Jul 10, 2018 at 1:30 AM Bakul Shah wrote: > On Sun, 08 Jul 2018 22:44:23 -0400 Dan Cross wrote: > Dan Cross writes: > > On Sun, Jul 8, 2018 at 4:31 PM Perry E. Metzger > wrote: > > > > > [snip] > > > On Wed, 4 Jul 2018 23:40:36 -0700 Bakul Shah > > > wrote: > > > > - Capabilities (a number of OSes implemented them -- See Hank > > > > Levy's book: https://homes.cs.washington.edu/~levy/capabook/ > > > > > > A fascinating resurrection of capabilities in a kind of pleasant > > > Unixy-way can be found in Robert Watson's "Capsicum" add-ons for > > > FreeBSD and Linux. I wish it was more widely known about and adopted. > > > > > > > - Namespaces (plan9) > > > > > > A good choice I think. > > > > Interestingly, it was in talking with Ben Laurie about a potential > security > > model for Akaros that I realized the correspondence between Plan9-style > > namespaces and capabilities. > > I tend to think they are orthogonal. Namespaces map names to > objects -- they control what objects you can see. > In many ways, capabilities do the same thing: by not being able to name a resource, I cannot access it.
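That point can be made concrete with a toy sketch in Python (hypothetical illustrative code, not any real Plan 9 or Capsicum interface): what a process can reach is exactly what is bound in the namespace it was handed, and a proxy interposed under the same name narrows what that name can do.

```python
class Net:
    """Stand-in for a real network stack."""
    def connect(self, host, port):
        return f"connected to {host}:{port}"

class FilteredNet:
    """Proxy interposed under the same name: it narrows, rather than grants, power."""
    def __init__(self, inner, allowed):
        self._inner = inner
        self._allowed = set(allowed)
    def connect(self, host, port):
        if host not in self._allowed:
            raise PermissionError(f"{host}: not reachable from this namespace")
        return self._inner.connect(host, port)

real = Net()
parent_ns = {"/net": real}                                # can name the real stack
child_ns = {"/net": FilteredNet(real, {"example.com"})}   # same name, weaker object

print(child_ns["/net"].connect("example.com", 80))        # connected to example.com:80
# child_ns["/net"].connect("other.test", 80) would raise PermissionError
```

The child program never knows it is talking to a proxy; the restriction lives entirely in what its namespace binds at "/net".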
_A_ way to think of plan9 style namespaces is as objects, with the names of files exposed by a particular filesystem being operations on those objects (I'm paraphrasing Russ Cox here, I think). In the plan9 world, most useful resources are implemented as filesystems. In that sense, not only does a particular namespace present a program with the set of resources it can access, but it also defines what I can do with those resources. Capabilities control what you can do with such an object. Cap > operations such as revoke, grant, attenuating rights > (subsetting the allowed ops) don't translate to namespaces. > I disagree. By way of example, I again reiterate the question I've asked several times now: there's a capability in e.g. Capsicum to allow a program to invoke the connect(2) system call: how does the capability system allow me to control the 5 tuple one might connect(2) to? I gave an example where, in the namespace world, I can interpose a proxy namespace that emulates the networking filesystem and restrict what I can do with the network. A cap is much like a file descriptor but it doesn't have to be > limited to just open files or directories. > > As an example, a client may invoke "read(fd, buffer, count)" > or "write(fd, buffer, count)". This call may be serviced by a > remote file server. Here "buffer" would be a cap on a memory > range within the client process -- we don't want the server to > have any more access to client's memory. When the fileserver > is ready to read or write, it has to securely arrange data > transfer to/from this memory range[1]. > So in other words, "buffer" is a name of an object and by being able to name that object, I can do something useful with it, subject to the limitations imposed by the object itself (e.g., `read(fd, buffer, count);` will fault if `buffer` points to read-only memory). You can even think of a cap as a "promise" -- you can perform > certain operations in future as opposed to right now.
[Rather > than read from memory and send the data along with the write() > call, the server can use the buffer cap in future. The file > server in turn can pass on the buffer cap to a storage server, > thereby reducing latency and saving a copy or two] > Similarly to the way that the existence of a file name in a process's namespace is in some ways a promise to be able to access some resource in the future. > Since Akaros had imported a lot of plan9 > > namespace code, we concluded that we already had the building blocks for > a > > capability-style security model with minimal additional work. I > > subsequently concluded that Capsicum wasn't strong enough to be complete. > > The point of my original comment was that if capabilities were > integral to a system (as opposed to being an add-on like > Capsicum) a much simpler Unix might've emerged. > > [1] In the old days we used copyin/copyout for this sort of > data transfer between a user process address space and the > kernel. This is analogous. > It's an aside, but it's astonishing to see how that's been bastardized by the use of ioctl() for such things in the Linux world. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bakul at bitblocks.com Tue Jul 17 02:59:08 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Mon, 16 Jul 2018 09:59:08 -0700 Subject: [COFF] Capabilities (was Re: Other OSes? In-Reply-To: Your message of "Mon, 16 Jul 2018 10:49:52 -0400." References: <20180705055650.GA2170@minnie.tuhs.org> <20180708163150.0c9e1870@jabberwock.cb.piermont.com> <20180710053039.BA35C156E400@mail.bitblocks.com> Message-ID: <20180716165915.7D4EF156E408@mail.bitblocks.com> On Mon, 16 Jul 2018 10:49:52 -0400 Dan Cross wrote: > > On Tue, Jul 10, 2018 at 1:30 AM Bakul Shah wrote: > > > I tend to think they are orthogonal. Namespaces map names to > > objects -- they control what objects you can see.
> > > > In many ways, capabilities do the same thing: by not being able to name a > resource, I cannot access it. > > _A_ way to think of plan9 style namespaces is as objects, with the names > of files exposed by a particular filesystem being operations on those > objects (I'm paraphrasing Russ Cox here, I think). In the plan9 world, most > useful resources are implemented as filesystems. In that sense, not only > does a particular namespace present a program with the set of resources it > can access, but it also defines what I can do with those resources. > > Capabilities control what you can do with such an object. Cap > > operations such as revoke, grant, attenuating rights > > (subsetting the allowed ops) don't translate to namespaces. > > > > I disagree. By way of example, I again reiterate the question I've asked > several times now: there's a capability in e.g. Capsicum to allow a program > to invoke the connect(2) system call: how does the capability system allow > me to control the 5 tuple one might connect(2) to? I gave an example where, > in the namespace world, I can interpose a proxy namespace that emulates the > networking filesystem and restrict what I can do with the network. You'd do something like "networkStackCap.connect(5-tuple)" where "networkStackCap" is a capability to a network stack service (or object) you were given. It could be a proxy server or a NAT server that rewrites things, or a remote handle on a service on another node. A client starts with some given caps. It can gain new caps only through operations on its existing caps. Whether you represent the 5-tuple as a path string or a tuple of 5 numbers or something else is up to how the network stack expects it. > A cap is much like a file descriptor but it doesn't have to be > > limited to just open files or directories. > > > > As an example, a client may invoke "read(fd, buffer, count)" > > or "write(fd, buffer, count)". This call may be serviced by a > > remote file server.
Here "buffer" would be a cap on a memory > > range within the client process -- we don't want the server to > > have any more access to client's memory. When the fileserver > > is ready to read or write, it has to securely arrange data > > transfer to/from this memory range[1]. > > > > So in other words, "buffer" is a name of an object and by being able to > name that object, I can do something useful with it, subject to the > limitations imposed by the object itself (e.g., `read(fd, buffer, count);` > will fault if `buffer` points to read-only memory). Right, but then in order to access the contents of this named buffer object you need to pass it another named buffer object.... Where is the bottom turtle? In the Unix/plan9 world a buffer is strictly local and you have to play games (such as copyin/copyout or "meltdown" friendly mapping). One way to compare them is this: imagine in a game of adventure you are exploring a place with many passages and rooms. In the cap world some of these rooms have a lock and you need the right key to access them (and in turn they may lead to other locked or open rooms). You can only access those rooms for which you were given keys when you started or the keys you found during your exploration. In the plan9 world you can't even see the door of a room if you are not allowed to access it. You need to read some magic scroll (mount) and new doors would magically appear! And there are some global objects that anyone can access (e.g. #e, #c, #k, #M, #s etc.). In contrast, cap clients are inherently sandboxed. They can't tell if it is Live or it is Memorex. plan9 / unix filesystems control access to a collection of objects. You still need rwxrwxrwx mode bits, which are not fine-grained enough -- the same for each group of users. Caps can control access to individual objects and you can subset this access (e.g. the equivalent of a valet key that doesn't allow access to the glove box or trunk, or a trunk key that doesn't allow driving a car).
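The valet-key idea is easy to make concrete with a toy sketch in Python (hypothetical code, not modelled on any real capability system): a derived key carries at most the rights of the key it was cut from, so rights can be subset but never amplified.

```python
class Key:
    """Toy capability: the set of operations this key permits on one object."""
    def __init__(self, ops):
        self._ops = frozenset(ops)
    def subset(self, ops):
        # A derived key can only carry rights the parent key already has.
        return Key(self._ops & frozenset(ops))
    def allows(self, op):
        return op in self._ops

owner = Key({"drive", "trunk", "glovebox"})
valet = owner.subset({"drive"})            # valet key: drive, but no trunk/glovebox
trunk_only = owner.subset({"trunk"})       # trunk key: no driving
sneaky = valet.subset({"drive", "trunk"})  # can't amplify: still drive-only

print(valet.allows("drive"), valet.allows("trunk"))   # True False
```

The intersection in subset() is what makes attenuation monotonic: no chain of derivations can ever produce a key stronger than the one it started from.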
You can even revoke access (e.g. change the locks). None of these map to namespaces without adding much more complication. Even plan9's namespaces are rather expensive in practice. Maybe because they evolved from unix, but for arbitrary objects things like owner, group, access and modification times etc. don't really matter. > > [1] In the old days we used copyin/copyout for this sort of > > data transfer between a user process address space and the > > kernel. This is analogous. > > > > It's an aside, but it's astonishing to see how that's been bastardized by > the use of ioctl() for such things in the Linux world. The unix filesystem abstraction is very good but it doesn't cover some uses, hence it has become a leaky abstraction. In the plan9 world you have ctl files but the commands they accept are still arbitrary (specific to the object). The great invention of unix was a set of a few abstractions that served extremely well for a majority of tasks. These can be used with caps. From perry at piermont.com Tue Jul 17 10:33:43 2018 From: perry at piermont.com (Perry E. Metzger) Date: Mon, 16 Jul 2018 20:33:43 -0400 Subject: [COFF] GoTEK SFR1M44-U100... In-Reply-To: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> References: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> Message-ID: <20180716203343.097af5b9@jabberwock.cb.piermont.com> On Fri, 13 Jul 2018 15:13:42 -0600 Grant Taylor via COFF wrote: > Does anyone have any experience with the GoTEK SFR1M44-U100 floppy > drive emulator that reads "images" from a USB flash drive? > > Good? > Bad? > Indifferent? > Run for the hills? Interesting. So this lets you boot ancient hardware without needing to keep a supply of (perishable) floppies, eh? Neat. No, I have no idea if it actually works well, though the amazon reviews for it and things like it seem reasonable. (The negative ones are all "I have no idea how to use what I bought.") Perry -- Perry E.
Metzger perry at piermont.com From wkt at tuhs.org Tue Jul 17 10:47:01 2018 From: wkt at tuhs.org (Warren Toomey) Date: Tue, 17 Jul 2018 10:47:01 +1000 Subject: [COFF] GoTEK SFR1M44-U100... In-Reply-To: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> References: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> Message-ID: <20180717004701.GA1870@minnie.tuhs.org> On Fri, Jul 13, 2018 at 03:13:42PM -0600, Grant Taylor via COFF wrote: > Does anyone have any experience with the GoTEK SFR1M44-U100 floppy drive > emulator that reads "images" from a USB flash drive? I'd try asking on the cctalk list as well: http://www.classiccmp.org/mailman/listinfo/cctalk Cheers, Warren From gtaylor at tnetconsulting.net Wed Jul 18 03:42:03 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 17 Jul 2018 11:42:03 -0600 Subject: [COFF] GoTEK SFR1M44-U100... In-Reply-To: <20180717004701.GA1870@minnie.tuhs.org> References: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> <20180717004701.GA1870@minnie.tuhs.org> Message-ID: <05645c32-aaff-5359-e48f-ef14acfa9bcf@spamtrap.tnetconsulting.net> On 07/16/2018 06:47 PM, Warren Toomey wrote: > I'd try asking on the cctalk list as well: I did exactly that. I've gotten quite a few good responses, and as expected, the thread has taken on its own life about floppy drives. http://www.classiccmp.org/pipermail/cctalk/2018-July/040750.html -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From gtaylor at tnetconsulting.net Wed Jul 18 03:48:28 2018 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 17 Jul 2018 11:48:28 -0600 Subject: [COFF] GoTEK SFR1M44-U100...
In-Reply-To: <20180716203343.097af5b9@jabberwock.cb.piermont.com> References: <7c960ef8-00f3-7239-e853-f70ffbb4f40b@spamtrap.tnetconsulting.net> <20180716203343.097af5b9@jabberwock.cb.piermont.com> Message-ID: On 07/16/2018 06:33 PM, Perry E. Metzger wrote: > Interesting. So this lets you boot ancient hardware without needing to > keep a supply of (perishable) floppies, eh? Neat. Yep. > No, I have no idea if it actually works well, though the amazon reviews > for it and things like it seem reasonable. (The negative ones are all > "I have no idea how to use what I bought.") I have received my GoTEK and my initial impression is something between neutral and positive. I'm still running the stock firmware on it but plan to transition to FlashFloppy (?) after my new soldering iron arrives. (My last one didn't make a cross-country move.) I don't know if the GoTEK is itself slow or if it's a result of what the computer was doing with it. My only experience was trying to have a Compaq System Utility Partition back itself up to the GoTEK. The first "disk" worked without a problem. The backup routine fails complaining about a file after formatting the second disk. I suspect this may be more the source than the destination. I'm sure there is a healthy dose of my ignorance of using the GoTEK. There was zero documentation that came with it. Online searches turn up a myriad of versions for the different models and it's all combining into a … cesspool seems like the proper word. I think I'm going to like the GoTEK as I get more experience with it. I am planning on trading out the firmware and modding it to add an OLED display so I'll have more information on what it's doing. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3982 bytes Desc: S/MIME Cryptographic Signature URL: From wkt at tuhs.org Wed Jul 18 15:51:43 2018 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 18 Jul 2018 15:51:43 +1000 Subject: [COFF] Asking a favour: 1 hour to review a teaching course Message-ID: <20180718055143.GA6927@minnie.tuhs.org> Hi all, I'm in need of a favour. I teach at a TAFE in Australia, equivalent to a polytechnic or a community college: below the level of a uni, with some theory and some hands-on skills. I need to find two industry people to review the assessment I have set for two courses. It would take about an hour of your time (I hope!). Would anybody be able to help me out? Many thanks! Warren From wkt at tuhs.org Thu Jul 19 06:05:46 2018 From: wkt at tuhs.org (Warren Toomey) Date: Thu, 19 Jul 2018 06:05:46 +1000 Subject: [COFF] Asking a favour: 1 hour to review a teaching course In-Reply-To: <20180718055143.GA6927@minnie.tuhs.org> References: <20180718055143.GA6927@minnie.tuhs.org> Message-ID: <20180718200546.GB23763@minnie.tuhs.org> On Wed, Jul 18, 2018 at 03:51:43PM +1000, Warren Toomey wrote: > I need to find two industry people to review the assessment I have set for > two courses. It would take about an hour of your time (I hope!). > Would anybody be able to help me out? I've had nearly a dozen offers to help out. I picked the first two and I've e-mailed them the details. Thank you to all those that offered their help! Cheers, Warren From crossd at gmail.com Tue Jul 24 02:41:46 2018 From: crossd at gmail.com (Dan Cross) Date: Mon, 23 Jul 2018 12:41:46 -0400 Subject: [COFF] [TUHS] Looking for final C compiler by Dennis Ritchie In-Reply-To: <20180723155552.GB19635@mcvoy.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> Message-ID: [+COFF and TUHS to Bcc:] Okay, here we go: troff vs. TeX food fight. 
On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: > I actually wacked a bunch of the Unix docs to make them look a little > better, I should see if I can find that. > I'd like to see that; presentation of some of those docs is getting a bit long in the tooth. > I agree that roff is awesome, it's a bummer that Latex seems to be > the winner (which I think is purely because the roff/eqn/pic/etc > docs weren't widely available back in the day). > I have to disagree with this, however. TeX (and more specifically LaTeX) won out for technical writing because, frankly, it produces nicer output than *roff did. If I were writing a thesis or paper, I'd frankly rather use LaTeX or AMSLaTeX. I've used eqn to try and typeset math; it's OK if it's all that you've got. An nroff approximation for output to the terminal is kind of nifty, but beyond that it simply pales in comparison to TeX. I know that people have, and perhaps still do, typeset mathematics with eqn/neqn/troff, but given a choice between the two, I think you'd be hard pressed to find a mathematician who would choose troff over TeX; similarly with most technical papers. Now tbl and pic, those are pretty cool. Even then, GNU pic will output TeX for incorporation into other documents, and LaTeX has some very nice table-creation environments that largely subsume the functionality of tbl. Now don't get me wrong, I *like* roff, and I use it occasionally for one-off things, but for serious writing for publication I'd generally choose LaTeX. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From grog at lemis.com Tue Jul 24 13:52:06 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Tue, 24 Jul 2018 13:52:06 +1000 Subject: [COFF] roff vs.
Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> Message-ID: <20180724035206.GA87618@eureka.lemis.com> On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: > On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: > >> I agree that roff is awesome, it's a bummer that Latex seems to be >> the winner (which I think is purely because the roff/eqn/pic/etc >> docs weren't widely available back in the day). > > I have to disagree with this, however. TeX (and more specifically > LaTeX) won out for technical writing because, frankly, it produces > nicer output than *roff did. If I were writing a thesis or paper, > I'd frankly rather use LaTeX or AMSLaTeX. What about a book? Back in the late 1980s/early 1990s I used TeX and LaTeX, but when I started writing "Porting UNIX Software" (O'Reilly), they insisted on me using (g)roff with their proprietary macros. I resisted, of course, but it was clear that I didn't have much choice. And then I discovered that it was *so* much easier to use, and I've never used TeX again, though I made significant modifications to the macro set, to the point that it was no more O'Reilly than ms. My big issue was that it produces nicer output than TeX. In those days at any rate you could tell TeX output a mile off because of the excessive margins and the Computer Modern fonts. Neither is required, of course, but it seems that it must have been so much more difficult to change than it was with [gt]roff (or that the authors just didn't care). Still, TeX has one significant advantage over [gt]roff that I'm aware of: it adjusts paragraphs, not lines, and it seems that in some cases this gives better-looking layout. This reflects the situation in about 1993 or 1994. Maybe TeX has become more usable since then. Certainly LaTeX refuses to look at my old LaTeX source. Greg -- Sent from my desktop computer.
Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From lm at mcvoy.com Tue Jul 24 14:01:46 2018 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 23 Jul 2018 21:01:46 -0700 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180724035206.GA87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> Message-ID: <20180724040146.GN28713@mcvoy.com> Back in 1998 or 1999 I was program chair for Linux expo. Which was no big deal, it meant I formatted the papers and got the page numbers right for the proceedings. I nudged people towards troff and one guy went there. He said "this was so much easier than tex, it is faster, easier, why don't more people use this?" What he said. On Tue, Jul 24, 2018 at 01:52:06PM +1000, Greg 'groggy' Lehey wrote: > On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: > > On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: > > > >> I agree that roff is awesome, it's a bummer that Latex seems to be > >> the winner (which I think is purely because the roff/eqn/pic/etc > >> docs weren't widely available back in the day). > > > > I have to disagree with this, however. TeX (and more specifically > > LaTeX) won out for technical writing because, frankly, it produces > > nicer output than *roff did. If I were writing a thesis or paper, > > I'd frankly rather use LaTeX or AMSLaTeX. > > What about a book?
Back in the late 1980s/early 1990s I used TeX and > LaTeX, but when I started writing "Porting UNIX Software" (O'Reilly), > they insisted on me using (g)roff with their proprietary macros. I > resisted, of course, but it was clear that I didn't have much choice. > And then I discovered that it was *so* much easier to use, and I've > never used TeX again, though I made significant modifications to the > macro set, to the point that it was no more O'Reilly than ms. > > My big issue was that it produces nicer output than TeX. In those > days at any rate you could tell TeX output a mile off because of the > excessive margins and the Computer Modern fonts. Neither is required, > of course, but it seems that it must have been so much more difficult > to change than it was with [gt]roff (or that the authors just didn't > care). > > Still, TeX has one significant advantage over [gt]roff that I'm aware > of: it adjusts paragraphs, not lines, and it seems that in some cases > this give better looking layout. > > This reflects the situation in about 1993 or 1994. Maybe TeX has > become more usable since then. Certainly LaTeX refuses to look at my > old LaTeX source. > > Greg > -- > Sent from my desktop computer. > Finger grog at lemis.com for PGP public key. > See complete headers for address and phone numbers. > This message is digitally signed. If your Microsoft mail program > reports problems, please read http://lemis.com/broken-MUA -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From ralph at inputplus.co.uk Tue Jul 24 20:00:41 2018 From: ralph at inputplus.co.uk (Ralph Corderoy) Date: Tue, 24 Jul 2018 11:00:41 +0100 Subject: [COFF] roff vs. 
Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180724035206.GA87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> Message-ID: <20180724100041.9B2B0214E0@orac.inputplus.co.uk> Hi Greg, > Still, TeX has one significant advantage over [gt]roff that I'm aware > of: it adjusts paragraphs, not lines, and it seems that in some cases > this give better looking layout. Gunnar Ritter's Heirloom troff, a descendant of OpenSolaris's, can adjust paragraphs rather than lines. http://heirloom.sourceforge.net/doctools.html Ali Gholami Rudi's neatroff, a troff re-implementation, adjusts paragraphs. Search for `paragraph-at-once' at http://litcave.rudi.ir/ or http://litcave.rudi.ir/neatroff.pdf There could be others. -- Cheers, Ralph. https://plus.google.com/+RalphCorderoy From cym224 at gmail.com Wed Jul 25 00:43:17 2018 From: cym224 at gmail.com (Nemo) Date: Tue, 24 Jul 2018 10:43:17 -0400 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180724040146.GN28713@mcvoy.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180724040146.GN28713@mcvoy.com> Message-ID: On 24/07/2018, Larry McVoy wrote: > Back in 1998 or 1999 I was program chair for Linux expo. Which was > no big deal, it meant I formatted the papers and got the page numbers > right for the proceeedings. I nudged people towards troff and one > guy went there. He said "this was so much easier than tex, it is > faster, easier, why don't more people use this?" What he said. As a datum point, I know of very few people using raw TeX -- actually none; everyone whom I know uses LaTeX (including me). And AST typesets all his books in troff for another datum point. What is sad is the IETF's decision to push people towards XML for RFCs. N.
> > On Tue, Jul 24, 2018 at 01:52:06PM +1000, Greg 'groggy' Lehey wrote: >> On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: >> > On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: >> > >> >> I agree that roff is awesome, it's a bummer that Latex seems to be >> >> the winner (which I think is purely because the roff/eqn/pic/etc >> >> docs weren't widely available back in the day). >> > >> > I have to disagree with this, however. TeX (and more specifically >> > LaTeX) won out for technical writing because, frankly, it produces >> > nicer output than *roff did. If I were writing a thesis or paper, >> > I'd frankly rather use LaTeX or AMSLaTeX. >> >> What about a book? Back in the late 1980s/early 1990s I used TeX and >> LaTeX, but when I started writing "Porting UNIX Software" (O'Reilly), >> they insisted on me using (g)roff with their proprietary macros. I >> resisted, of course, but it was clear that I didn't have much choice. >> And then I discovered that it was *so* much easier to use, and I've >> never used TeX again, though I made significant modifications to the >> macro set, to the point that it was no more O'Reilly than ms. >> >> My big issue was that it produces nicer output than TeX. In those >> days at any rate you could tell TeX output a mile off because of the >> excessive margins and the Computer Modern fonts. Neither is required, >> of course, but it seems that it must have been so much more difficult >> to change than it was with [gt]roff (or that the authors just didn't >> care). >> >> Still, TeX has one significant advantage over [gt]roff that I'm aware >> of: it adjusts paragraphs, not lines, and it seems that in some cases >> this give better looking layout. >> >> This reflects the situation in about 1993 or 1994. Maybe TeX has >> become more usable since then. Certainly LaTeX refuses to look at my >> old LaTeX source. >> >> Greg >> -- >> Sent from my desktop computer. >> Finger grog at lemis.com for PGP public key. 
>> See complete headers for address and phone numbers. >> This message is digitally signed. If your Microsoft mail program >> reports problems, please read http://lemis.com/broken-MUA > > > > -- > --- > Larry McVoy lm at mcvoy.com > http://www.mcvoy.com/lm > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff From perry at piermont.com Thu Jul 26 07:24:40 2018 From: perry at piermont.com (Perry E. Metzger) Date: Wed, 25 Jul 2018 17:24:40 -0400 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180724035206.GA87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> Message-ID: <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> On Tue, 24 Jul 2018 13:52:06 +1000 Greg 'groggy' Lehey wrote: > On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: > > On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: > My big issue was that it produces nicer output than TeX. In those > days at any rate you could tell TeX output a mile off because of the > excessive margins and the Computer Modern fonts. Neither is > required, of course, but it seems that it must have been so much > more difficult to change than it was with [gt]roff (or that the > authors just didn't care). It's a single command most of the time to change font. \usepackage{palatino} for example. (That's at the start of many of my documents.) It's also a single command to change your margins. Similar complexity, a dozen chars and you're done. I don't love TeX's command language, it's gross, but it's not hard to do simple things like that, and the typesetting results are kind of remarkable if you know what you're doing. The most beautiful books in the world (by a lot) are typeset in modern TeX. 
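For reference, the one-liners being described fit in a minimal preamble (a sketch: palatino is the package named in the mail above, while geometry is an assumed but conventional way to set margins):

```latex
% Minimal sketch: switch the text font away from Computer Modern and
% override the default page margins. \usepackage{palatino} is quoted
% from the mail above; the geometry package is an assumed choice --
% any margin-setting mechanism would do.
\documentclass{article}
\usepackage{palatino}               % a dozen characters to change the font
\usepackage[margin=2.5cm]{geometry} % similar effort to change the margins
\begin{document}
Body text now comes out in Palatino with 2.5\,cm margins.
\end{document}
```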
I don't even think you can do microtypography in any troff that I've seen, and forget things like having both lining and text figures in the same document. Perry -- Perry E. Metzger perry at piermont.com From grog at lemis.com Thu Jul 26 14:22:20 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Thu, 26 Jul 2018 14:22:20 +1000 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> Message-ID: <20180726042220.GC87618@eureka.lemis.com> On Wednesday, 25 July 2018 at 17:24:40 -0400, Perry E. Metzger wrote: > On Tue, 24 Jul 2018 13:52:06 +1000 Greg 'groggy' Lehey > wrote: >> On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: >>> On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy wrote: >> My big issue was that it produces nicer output than TeX. In those >> days at any rate you could tell TeX output a mile off because of the >> excessive margins and the Computer Modern fonts. Neither is >> required, of course, but it seems that it must have been so much >> more difficult to change than it was with [gt]roff (or that the >> authors just didn't care). > > It's a single command most of the time to change font. > > \usepackage{palatino} > > for example. (That's at the start of many of my documents.) That's the case now, I assume. I've just dragged out the TeXbook (February 1989), LaTeX user's guide and reference (referring to LaTeX 2.06 (April 1986)) and "TeX for the Impatient" (1990). None of them mention this command, and after 20 minutes of searching I wasn't able to find any reference in any of them to any font family except CM, and thus also no way to change to one. 
About the only titbit I found was that you needed separate commands for each font at each size, and that this was impractical. A far cry from troff's .ps command. > I don't love TeX's command language, it's gross, but it's not hard > to do simple things like that, and the typesetting results are kind > of remarkable if you know what you're doing. The most beautiful > books in the world (by a lot) are typeset in modern TeX. I don't > even think you can do microtypography in any troff that I've seen, > and forget things like having both lining and text figures in the > same document. It's possible, and I've done it (even simulating the "TeX" symbol). But then, I've written my own macros, and I found it easier than messing with TeX. Still, this isn't a TeX-bashing session. I was just explaining why I changed. Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From bakul at bitblocks.com Thu Jul 26 16:38:09 2018 From: bakul at bitblocks.com (Bakul Shah) Date: Wed, 25 Jul 2018 23:38:09 -0700 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: Your message of "Thu, 26 Jul 2018 14:22:20 +1000." <20180726042220.GC87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> <20180726042220.GC87618@eureka.lemis.com> Message-ID: <20180726063816.EB9E2156E400@mail.bitblocks.com> On Thu, 26 Jul 2018 14:22:20 +1000 Greg 'groggy' Lehey wrote: > > That's the case now, I assume.
I've just dragged out the TeXbook > (February 1989), LaTeX user's guide and reference (referring to LaTeX > 2.06 (April 1986)) and "TeX for the Impatient" (1990). None of them > mention this command, and after 20 minutes of searching I wasn't able > to find any reference in any of them to any font family except CM, and > thus also no way to change to one. About the only titbit I found was > that you needed separate commands for each font at each size, and that > this was impractical. A far cry from troff's .ps command. You may be used to an earlier version of LaTeX (LaTeX 2e?). Things are considerably better now. I didn't use LaTeX much for over a decade but now, with editors such as TeXWorks and faster machines, rendering is quite fast. There are also some web-based TeX editors that are quite good (and show the rendered page in one pane). Many more fonts are available now. And there is plenty of help available at tex.stackexchange.com. For short simple documents I generally use MarkDown or AsciiDoc. Their light markup means source files are quite readable in a terminal window, and they still render well to an html page or pdf (and you can include images). For more complex editing tasks I switch to LaTeX (or XeLaTeX). And Unicode has helped for Indic scripts. I don't have to use transliterated Roman with diacritics for Indian languages (hard to read/write in this form for a native speaker). From perry at piermont.com Thu Jul 26 22:22:54 2018 From: perry at piermont.com (Perry E. Metzger) Date: Thu, 26 Jul 2018 08:22:54 -0400 Subject: [COFF] roff vs.
Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180726042220.GC87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> <20180726042220.GC87618@eureka.lemis.com> Message-ID: <20180726082254.2ee99276@jabberwock.cb.piermont.com> On Thu, 26 Jul 2018 14:22:20 +1000 Greg 'groggy' Lehey wrote: > On Wednesday, 25 July 2018 at 17:24:40 -0400, Perry E. Metzger > wrote: > > On Tue, 24 Jul 2018 13:52:06 +1000 Greg 'groggy' Lehey > > wrote: > >> On Monday, 23 July 2018 at 12:41:46 -0400, Dan Cross wrote: > >>> On Mon, Jul 23, 2018 at 11:56 AM Larry McVoy > >>> wrote: > >> My big issue was that it produces nicer output than TeX. In > >> those days at any rate you could tell TeX output a mile off > >> because of the excessive margins and the Computer Modern fonts. > >> Neither is required, of course, but it seems that it must have > >> been so much more difficult to change than it was with [gt]roff > >> (or that the authors just didn't care). > > > > It's a single command most of the time to change font. > > > > \usepackage{palatino} > > > > for example. (That's at the start of many of my documents.) > > That's the case now, I assume. I've just dragged out the TeXbook > (February 1989), LaTeX user's guide and reference (referring to > LaTeX 2.06 (April 1986)) and "TeX for the Impatient" (1990). None > of them mention this command, It's true, books that are thirty years old and over might not be the best reference for the software. Even back then, though, the commands involved were pretty simple. They just weren't mentioned in the books you read, probably because in 1986 and the like there weren't an abundance of fonts available. (I never use the CMR fonts except in my CV. There, it's a signal that it was written in TeX.) Perry -- Perry E. 
Metzger perry at piermont.com From lm at mcvoy.com Fri Jul 27 11:04:12 2018 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 26 Jul 2018 18:04:12 -0700 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180724100041.9B2B0214E0@orac.inputplus.co.uk> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180724100041.9B2B0214E0@orac.inputplus.co.uk> Message-ID: <20180727010412.GJ9644@mcvoy.com> Wasn't the Heirloom troff Jörg's work? Did Gunnar take over? On Tue, Jul 24, 2018 at 11:00:41AM +0100, Ralph Corderoy wrote: > Hi Greg, > > > Still, TeX has one significant advantage over [gt]roff that I'm aware > > of: it adjusts paragraphs, not lines, and it seems that in some cases > > this give better looking layout. > > Gunnar Ritter's Hierloom troff, a descendent of OpenSolaris's, can > adjust paragraphs rather than lines. > http://heirloom.sourceforge.net/doctools.html > > Ali Gholami Rudi's neatroff, a troff re-implementation, adjusts > paragraphs. Search for `paragraph-at-once' at http://litcave.rudi.ir/ > or http://litcave.rudi.ir/neatroff.pdf > > There could be others. > > -- > Cheers, Ralph. > https://plus.google.com/+RalphCorderoy > _______________________________________________ > COFF mailing list > COFF at minnie.tuhs.org > https://minnie.tuhs.org/cgi-bin/mailman/listinfo/coff -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From grog at lemis.com Fri Jul 27 11:28:38 2018 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Fri, 27 Jul 2018 11:28:38 +1000 Subject: [COFF] roff vs.
Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180726082254.2ee99276@jabberwock.cb.piermont.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> <20180726042220.GC87618@eureka.lemis.com> <20180726082254.2ee99276@jabberwock.cb.piermont.com> Message-ID: <20180727012838.GD87618@eureka.lemis.com> On Thursday, 26 July 2018 at 8:22:54 -0400, Perry E. Metzger wrote: > On Thu, 26 Jul 2018 14:22:20 +1000 Greg 'groggy' Lehey > wrote: >> On Wednesday, 25 July 2018 at 17:24:40 -0400, Perry E. Metzger >> wrote: >>> It's a single command most of the time to change font. >>> >>> \usepackage{palatino} >>> >>> for example. (That's at the start of many of my documents.) >> >> That's the case now, I assume. I've just dragged out the TeXbook >> (February 1989), LaTeX user's guide and reference (referring to >> LaTeX 2.06 (April 1986)) and "TeX for the Impatient" (1990). None >> of them mention this command, > > It's true, books that are thirty years old and over might not be the > best reference for the software. They were the ideal reference when I used TeX. That was my point. Clearly it has improved, but too late for me. > Even back then, though, the commands involved were pretty > simple. They just weren't mentioned in the books you read, probably > because in 1986 and the like there weren't an abundance of fonts > available. They were available for groff, and O'Reilly gave me a set of Garamond Light (their fonts of the day). I briefly considered using them for TeX, but that ended up in the "too hard" basket. > (I never use the CMR fonts except in my CV. There, it's a signal > that it was written in TeX.) Exactly. And of course my aversion to TeX documents relates to exactly that look. Probably there are many documents that I just don't recognize. Greg -- Sent from my desktop computer. 
Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From cym224 at gmail.com Fri Jul 27 12:52:37 2018 From: cym224 at gmail.com (Nemo) Date: Thu, 26 Jul 2018 22:52:37 -0400 Subject: [COFF] roff vs. Tex (was: Looking for final C compiler by Dennis Ritchie) In-Reply-To: <20180727012838.GD87618@eureka.lemis.com> References: <8ECDA62D-1B54-4391-A226-D3E9ABEE4C07@planet.nl> <20180723155552.GB19635@mcvoy.com> <20180724035206.GA87618@eureka.lemis.com> <20180725172440.0a27e0e9@jabberwock.cb.piermont.com> <20180726042220.GC87618@eureka.lemis.com> <20180726082254.2ee99276@jabberwock.cb.piermont.com> <20180727012838.GD87618@eureka.lemis.com> Message-ID: On 26/07/2018, Greg 'groggy' Lehey wrote: > On Thursday, 26 July 2018 at 8:22:54 -0400, Perry E. Metzger wrote: [...] >> (I never use the CMR fonts except in my CV. There, it's a signal >> that it was written in TeX.) > > Exactly. And of course my aversion to TeX documents relates to > exactly that look. Probably there are many documents that I just > don't recognize. Amusingly (or not), a colleague told me that he only takes seriously documents in CMR because all his professional journals are in CMR. Personally, I use Times for legal folk and Palatino for others -- when not forced to use Stutter (a.k.a. Word), that is. N.
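As a postscript to the eqn-versus-TeX comparison up-thread, the same formula in both notations (a sketch; the eqn line is written from memory and unverified):

```latex
% The quadratic formula in LaTeX, for comparison with eqn as discussed
% up-thread. The roughly equivalent eqn input (from memory, unverified):
%   x = {-b +- sqrt {b sup 2 - 4ac}} over {2a}
\documentclass{article}
\begin{document}
\[ x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a} \]
\end{document}
```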