[TUHS] off-topic list

Steffen Nurpmeso steffen at sdaoden.eu
Tue Jun 26 01:51:03 AEST 2018


Grant Taylor via TUHS wrote in <09ee8833-c8c0-8911-751c-906b737209b7 at spamtrap.tnetconsulting.net>:
 |On 06/23/2018 04:38 PM, Steffen Nurpmeso wrote:
 |> Absolutely true.  And hoping that, unlike with web browsers, no (let me 
 |> call it) pseudo feature race is started that results in less diversity 
 |> rather than anything else.
 |
 |I'm not sure I follow.  I feel like we do have the choice of MUAs or web 
 |browsers.  Sadly some choices are lacking compared to other choices. 
 |IMHO the maturity of some choices and lack thereof in other choices 
 |does not mean that we don't have choices.

Interesting.  I do not see a real choice for me.  Netsurf may be
one, but it could not do Javascript when i last looked, so it will
not work for many sites, and increasingly so.  Just an hour ago
i had a fight with Chromium on the "secure box" i use for the
stuff which knows about bank accounts and passwords in general,
and (beside the fact that it crashes when trying to prepare PDF
printouts) it is just unusably slow.  I fail to understand (well,
actually i fail to understand a lot of things, but) why you need
a GPU process for a 2-D HTML page.  And then this:

  libGL error: unable to load driver: swrast_dri.so
  libGL error: failed to load driver: swrast
  [5238:5251:0625/143535.184020:ERROR:bus.cc(394)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
  [5238:5279:0625/143536.401172:ERROR:bus.cc(394)] Failed to connect to the bus: Could not parse server address: Unknown address type (examples of valid types are "tcp" and on UNIX "unix")
  libGL error: unable to load driver: swrast_dri.so
  libGL error: failed to load driver: swrast
  [5293:5293:0625/143541.299106:ERROR:gl_implementation.cc(292)] Failed to load /usr/lib/chromium/swiftshader/libGLESv2.so: Error loading shared library /usr/lib/chromium/swiftshader/libGLESv2.so: No such file or directory
  [5293:5293:0625/143541.404733:ERROR:viz_main_impl.cc(196)] Exiting GPU process due to errors during initialization
[..repeating..]
  [5238:5269:0625/143544.747819:ERROR:browser_gpu_channel_host_factory.cc(121)] Failed to launch GPU process.
  [5238:5238:0625/143544.749814:ERROR:gpu_process_transport_factory.cc(1009)] Lost UI shared context.

I mean, it simply is not there, ok?
I cannot really judge how hard it is to write a modern web
browser, with the highly integrated DOM tree, CSS and Javascript
and such; and the devil is in the details anyway.
I also realize that Chromium does not seem to offer options for
cookies and such - i keep a different, older browser for that.
All of that is off-topic anyway, but i do not see why you say we
have options.

 |> If there is the freedom for a decision.  That is how it goes, yes.  For 
 ..
 |> But it is also nice to see that there are standards which were not fully 
 |> thought through and required backward incompatible hacks to fill the gaps.
 |
 |Why is that a "nice" thing?

Good question.  Then again, young men (and women) need to have
a chance to do anything at all, practically speaking.  For
example, we now see hundreds of new RFCs per year, a pace far
beyond that of the first two decades.  And all is getting better.

 |> Others like DNS are pretty perfect and scale fantastic.  Thus.
 |
 |Yet I frequently see DNS problems for various reasons.  Not the least of 
 |which is that many clients do not gracefully fall back to the secondary 
 |DNS server when the primary is unavailable.

Really?  Not that i know of.  Resolvers should be capable of
providing quality of service when multiple name servers are known,
i would say.  This is even in RFC 1034, as far as i can see: SLIST
and SBELT, where the latter is filled in from the "nameserver"
entries of /etc/resolv.conf, of which there should then be
several.  (Or point to localhost and run a true local resolver, or
something like dnsmasq.)
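
(For example, a minimal /etc/resolv.conf that lists more than one
server could look like this; the addresses are placeholders only:)

  # primary and secondary servers, tried in order
  nameserver 192.0.2.1
  nameserver 192.0.2.2
  # shorten the per-query timeout and the number of attempts
  options timeout:2 attempts:2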

I do see DNS failures over Wifi, but that is not the fault of DNS,
it is the fault of the provider i use.

P.S.: actually the only three things i have ever hated about DNS,
and i came to it in 2004, with EDNS etc. already all around, start
with backward compatibility having been chosen for domain names,
so that we gained IDNA, which is a terribly complicated brainfuck
thing that actually caused incompatibilities, but these were then
waved through and deemed ok.  That is absurd.

(If UTF-8 is too lengthy, i would have used UTF-7 for this; its
average octet-per-character ratio is still good enough for most
things on the internet.  An EDNS bit could have been used to
signal extended domain name/label lengths, and had that been done
25 years ago we would be fine in practice by now.  Until then
registrars and administrators would have had to decide whether
they want to use extended names or not.  And labels of 25
characters are not common even today, nowhere i have ever looked.)

And the third is DNSSEC, whose standard i read and to which i said
"no".  Just last year, or the year before that, we finally got DNS
over TCP/TLS and DNS over DTLS, that is, normal transport
security!  Twenty years too late, but i really had good days when
i saw those standards flying by!  Now all we need are zone
administrators who publish certificates via DNS, and DTLS and
TCP/TLS consumers which can integrate those into their own local
pool (at least for their runtime).
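
(As a rough sketch of what i mean: with an unbound(8) forwarder,
something along these lines forwards all queries over TLS on port
853; the server address and the certificate bundle path are
placeholders only:)

  server:
      tls-cert-bundle: "/etc/ssl/certs/ca-certificates.crt"
  forward-zone:
      name: "."
      forward-addr: 192.0.2.53@853
      forward-tls-upstream: yes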

 |> Ah, it has become a real pain.  It is ever so astounding how much HTML5 
 |> specifics, scripting and third party analytics can be involved for a page 
 |> that would have looked better twenty years ago, or say whenever CSS 2.0 
 |> came around.
 |
 |I'm going to disagree with you there.  IMHO the standard is completely 
 |separate from what people do with it.

They used <object> back in the 90s; an equivalent should have made
it into the standard back then.  I mean, this direction was clear,
right?  Maybe then the right people would still have more
influence.  Not that it would matter; all those many people from
the industry will always sail their own regatta and use what is
hip.
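
(What i mean is the generic embedding element; roughly, and with
the attributes written from memory, something like:)

  <object data="intro.mpeg" type="video/mpeg" width="320" height="240">
    Fallback text for user agents that cannot render the object.
  </object>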

 |> Today almost each and every commercial site i visit is in practice 
 |> a denial of service attack.  For me and my little box at least, and 
 |> without gaining any benefit from that!
 |
 |I believe those webmasters have made MANY TERRIBLE choices and have 
 |ended up with the bloat that they now have.  -  I do not blame the 
 |technology.  I blame what the webmasters have done with the technology.
 |
 |Too many people do not know what they are doing and load "yet another 
 |module" to do something that they can already do with what's already 
 |loaded on their page.  But they don't know that.  They just glue things 
 |together until they work.  Even if that means that they are loading the 
 |same thing multiple times because multiple of their 3rd party components 
 |load a common module themselves.  This is particularly pernicious with 
 |JavaScript.

Absolutely.  And i am watching the car industry, as it presents
itself in Germany, pretty closely, and they are all a nuisance in
how they seem to track each other and step into each other's
footsteps.  We had those all-script-based slide effects, and they
all need high-definition images and need to load them all at once,
non-progressively.  It becomes absurd if you need to download 45
megabytes of data, and more than 43 of those are an image of a car
in nature with a very clean model wearing an axe.  This is "cool"
only if you have a modern environment and the data locally
available.  In my humble opinion.

  ...
 |> I actually have no idea of nmh, but i for one think the sentence of 
 |> the old BSD Mail manual stating it is an "intelligent mail processing 
 |> system, which has a command syntax reminiscent of ed(1) with lines 
 |> replaced by messages" has always been a bit excessive.  And the fork i 
 |> maintain additionally said it "is also usable as a mail batch language, 
 |> both for sending and receiving mail", adding onto that.
 |
 |What little I know about the MH type mail stores and associated 
 |utilities is indeed quite powerful.  I think they operate under the 
 |premise that each message is its own file and that you work in 
 |something akin to a shell, if not your actual OS shell.  I think the MH 
 |commands are quite literally unix commands that can be called from the 
 |unix shell.  I think this is in the spirit of simply enhancing the shell 
 |to seem as if it has email abilities via the MH commands.  Use any 
 |traditional unix text processing utilities you want to manipulate email.
 |
 |MH has always been attractive to me, but I've never used it myself.

I actually hate the concept very, very much ^_^; for me it has
similarities with brainfuck.  I could not use it.
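
(To illustrate what i mean: as far as i understand it, a typical
nmh session looks roughly like this, each command being an
ordinary program run from the shell:)

  inc                  # incorporate new mail into the current folder
  scan                 # list the messages in that folder
  show 5               # display message number 5
  pick -from grant     # select messages by header contents
  refile 5 +archive    # move message 5 into the +archive folder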

 |> You say it.  In the research Unix nupas mail (as i know it from Plan9) 
 |> all those things could have been done with a shell script and standard 
 |> tools like grep(1) and such.
 |
 |I do say that and I do use grep, sed, awk, formail, procmail, cp, mv, 
 |and any number of traditional unix file / text manipulation utilities on 
 |my email weekly.  I do this both with the Maildir (which is quite 
 |similar to MH) on my mail server and with the Thunderbird message store 
 |that is itself a variant of Maildir with a file per message in a 
 |standard directory structure.
 |
 |> Even Thunderbird would simply be a maybe even little nice graphical 
 |> application for display purposes.
 |
 |The way that I use Thunderbird, that's exactly what it is.  A friendly 
 |and convenient GUI front end to access my email.
 |
 |> The actual understanding of storage format and email standards would 
 |> lie solely within the provider of the file system.
 |
 |The emails are stored in discrete files that themselves are effectively 
 |mbox files with one message therein.
 |
 |> Now you use several programs which all ship with all the knowledge.
 |
 |I suppose if you count grepping for a line in a text file as knowledge of 
 |the format, okay.
 |
 |egrep "^Subject: " message.txt
 |
 |There's nothing special about that.  It's a text file with a line that 
 |looks like this:
 |
 |Subject: Re: [TUHS] off-topic list

Except that this will only work for all-English text, as otherwise
character sets come into play: the text may be in a different
character set, the mail standards may impose a content-transfer
encoding, and then what you are looking at is actually a database,
not the data as such.
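
For example, a non-ASCII subject travels as a MIME encoded word
(RFC 2047), so the raw header line may look like

  Subject: =?utf-8?B?UmU6IMOcYmVyc2ljaHQ=?=

which decodes to "Re: Übersicht"; a plain grep over the file only
ever sees the encoded form.  (The example subject is invented, of
course.)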

This is what i find so impressive about the Plan9 approach, where
the individual subparts of the message are available as files in
the filesystem, subjects etc. included, decoded and readable as
normal files.  I think this really is .. impressive.
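
(Roughly, and from memory: upas/fs presents each message as a
directory of small files, among others something like

  % ls /mail/fs/mbox/1
  /mail/fs/mbox/1/body
  /mail/fs/mbox/1/from
  /mail/fs/mbox/1/header
  /mail/fs/mbox/1/raw
  /mail/fs/mbox/1/subject
  % cat /mail/fs/mbox/1/subject
  Re: [TUHS] off-topic list

so that plain cat(1) and grep(1) already operate on decoded data.)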

 |> I for one just want my thing to be easy and reliable controllable via 
 |> a shell script.
 |
 |That's a laudable goal.  I think MH is very conducive to doing that.

Maybe.

 |> You could replace procmail (which is i think perl and needs quite some 
 |> perl modules) with a monolithic possibly statically linked C program.
 |
 |I'm about 95% certain that procmail is its own monolithic C program. 
 |I've never heard of any reference to Perl in association with procmail. 
 |Are you perhaps thinking of a different local delivery agent?

Oh, really?  Then that is likely so, yes.  I have never used it.

 |> Then.  With full error checking etc.  This is a long road ahead, for 
 |> my thing.
 |
 |Good luck to you.

Thanks, eh, thanks.  Time will tell.. or not.

 |> So ok, it does not, actually.  It chooses your "Grant Taylor via TUHS" 
 |> which ships with the TUHS address, so one may even see this as an 
 |> improvement to DMARC-less list replies, which would go to TUHS, with or 
 |> without the "The Unix Heritage Society".
 |
 |Please understand, that's not how I send the emails.  I send them with 
 |my name and my email address.  The TUHS mailing list modifies them.
 |
 |Aside:  I support the modification that it is making.

It is of course no longer the email that left you.  And it is not
just that headers are added to record the path taken, traceroute
style.  I do have a bad feeling about these modifications, but
technically i do not seem to have an opinion.

 |> I am maybe irritated by the 'dkim=fail reason="signature verification 
 |> failed"' your messages produce.  It would not be good to filter out 
 |> failing DKIMs, at least on TUHS.
 |
 |Okay.  That is /an/ issue.  But I believe it's not /my/ issue to solve.
 |
 |My server DKIM signs messages that it sends out.  From everything that 
 |I've seen and tested (and I actively look for problems) the DKIM 
 |signatures are valid and perfectly fine.
 |
 |That being said, the TUHS mailing list modifies messages in the following 
 |ways:
 |
 |1)  Modifies the From: when the sending domain uses DMARC.
 |2)  Modifies the Subject to prepend "[TUHS] ".
 |3)  Modifies the body to append a footer.
 |
 |All three of these actions modify the data that receiving DKIM filters 
 |calculate hashes based on.  Since the data changed, obviously the hash 
 |will be different.
 |
 |I do not fault TUHS for this.
 |
 |But I do wish that TUHS stripped DKIM and associated headers from 
 |messages going into the mailing list.  By doing that, there would be no 
 |leftover signature data whose hashes could fail to match.
 |
 |I think it would be even better if TUHS would DKIM sign messages as they 
 |leave the mailing list's mail server.
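
Indeed: a DKIM-Signature header records which headers went into
the hash (h=) and a hash of the body (bh=), roughly like this
(values shortened and invented):

  DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
          d=example.net; s=mail; t=1529900000;
          h=from:subject:date:message-id:to;
          bh=InventedBodyHash=; b=InventedSignature=

so any of the three changes above breaks the verification.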

Well, this adds the burden onto TUHS.  Just like i have said.. but
you know, more and more SMTP servers connect directly via STARTTLS
or TCP/TLS right away.  The TUHS postfix server does not seem to
do so on the sending side -- you know, i am not an administrator;
not before the 15th of March this year did i realize that my
Postfix did not mirror all the smtpd_* variables as smtp_* ones,
resulting in my outgoing client connections having an entirely
different configuration than what i had provided for what
i thought was "the server".  Then i did, among other things,

  smtpd_tls_security_level = may
 +smtp_tls_security_level = $smtpd_tls_security_level
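
(The values now configured can be double-checked with postconf(1),
for example

  postconf smtpd_tls_security_level smtp_tls_security_level

which prints one "name = value" line per parameter.)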

But if TUHS did use TLS throughout, why should it create a DKIM
signature at all?  There is an ongoing effort to ensure SMTP uses
TLS all along the route; i seem to recall having seen RFCs pass by
which accomplish that.  Or only drafts??  Hmmm.

  ...
 |I don't know if PGP or S/MIME will ever mandate anything about headers 
 |which are structurally outside of their domain.
 |
 |I would like to see an option in MUAs that support encrypted email for 
 |something like the following:
 |
 |    Subject:  (Subject in encrypted body.)
 |
 |Where the encrypted body included a header like the following:
 |
 |    Encrypted-Subject: Re: [TUHS] off-topic list
 |
 |I think that MUAs could then display the subject that was decrypted out 
 |of the encrypted body.

Well, S/MIME does indeed specify this mode of encapsulating the
entire message including the headers, and it requires MUAs to
completely ignore the outer headers in that case.  (With an RFC
discussing the problems of this approach.)  The BSD Mail clone
i maintain does not support this yet, along with other important
aspects of S/MIME, like the possibility to "self-encrypt" (so that
the message can be read again later, i.e. so that the encrypted
version, not the plaintext one, lands on disk in a record).
I hope this will be part of the OpenPGP, actually privacy, rewrite
this summer.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)


