[TUHS] Charles Forsyth on putting Unix on a diet.

George Michaelson ggm at algebras.org
Fri Oct 27 08:12:32 AEST 2017


I worked with Charles at the University of York. He's the guy who said
to me, about SMTP, "how can it be simple, if the printout is half an
inch thick?"

Charles also enjoyed himself: for fancy dress he had a full-on Sun King
court suit made, to go with his huge head of hair. This, in York, a
city of red-brick terraced houses, back-to-backs, and beer. Louis XIV
walking down t' road for a pint...

I think the fact that he implemented streams in one or two pages of
text from the BLTJ article, and then sighed at the size of STREAMS
when it came to life, says it all. He was very, very strong on a
reductionist 'size matters' world view. His editor of choice in 1984
was ed, with a command macro to repaint the window from -22 to +22
around the . line.
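
(A reconstruction of the idea, not necessarily his exact macro: in ed
an address range does that repaint in one command, e.g. .-22,.+22p to
print the 22 lines either side of dot.)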

(I hasten to add that Charles and I worked in totally different areas,
and I use the term "worked" with regard to myself with some
trepidation.)

On Fri, Oct 27, 2017 at 3:27 AM, Larry McVoy <lm at mcvoy.com> wrote:
> On Tue, Oct 24, 2017 at 10:51:17PM -0400, Dan Cross wrote:
>> I've always enjoyed this paper; recently I found occasion to thumb
>> through it again. I thought I'd pass it on; I'm curious what some on
>> the list think about this given their first-hand knowledge of relevant
>> history (Larry, I'm looking at you; especially with respect to his
>> comments on the VM system).
>>
>>         - Dan C.
>>
>> http://www.terzarima.net/doc/taste.pdf
>
> OK, I've read it.  This could have been a great paper if he had included
> some performance results.  As it is, I'm sorry that his ideas didn't take
> hold, or at least get some discussion.
>
> The paging stuff is neat but it doesn't address the file system page cache
> so far as I can tell.  It's process-based, so unless the process had the
> file mmap-ed it wouldn't take care of it.  And mmap didn't exist at the
> time, so I'm not sure how he handled the page cache.
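
A minimal sketch of the distinction Larry is drawing, assuming nothing
about the paper's actual design: pages a process has mmap-ed sit in its
address space where a per-process pager can see them, while pages
pulled in with read() are cached by the kernel but belong to no
process.

    /* Sketch: two ways a file's pages end up cached.  A per-process
     * pageout scheme only sees the first; read() traffic never
     * appears in any process address space. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/passwd", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("open/fstat");
            return 1;
        }

        /* Case 1: mmap -- the pages are mapped into this process,
         * so a process-based pager can find and reclaim them. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Case 2: read -- the kernel caches the file's pages, but
         * they live only in the file system cache. */
        char buf[4096];
        (void)read(fd, buf, sizeof buf);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }
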
>
> He was fixated on code size, and yeah, on 4MB machines you need to
> be; even on 8MB you need to be (at Sun we called EMACS Eight Megs And
> Constantly Swapping).  But I'd like to know if his paging scheme worked.
> I went through a period where I wrote over a dozen different pageout
> daemons in an effort to do a better job; none did.  They had certain
> use cases where they did better, but then other workloads made them
> perform worse.  That global pageout daemon is still around; it's
> depressingly hard to do better than that.  If he succeeded it would
> be nice to have numbers.
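
For reference, a sketch of the two-handed clock idea that global
pageout daemons of the SunOS lineage are built around; every name
below is invented for illustration, not any real kernel's interface.

    /* Sketch of a two-handed clock pageout scan.  The front hand
     * clears reference bits; the back hand, HANDSPREAD pages behind,
     * reclaims any page not re-referenced since the front hand
     * passed over it. */
    #include <stdio.h>

    #define NPAGES     (1 << 16)
    #define HANDSPREAD 8192        /* distance between the hands */

    struct page {
        int referenced;            /* set when the page is touched */
        int dirty;                 /* needs writeback before reuse */
    };

    static struct page pages[NPAGES];

    static int pageout_scan(int wanted)
    {
        static int front, back = NPAGES - HANDSPREAD;
        int freed = 0;

        while (freed < wanted) {
            pages[front].referenced = 0;
            front = (front + 1) % NPAGES;

            if (!pages[back].referenced) {
                /* write back if dirty, then put on the free list */
                pages[back].dirty = 0;
                freed++;
            }
            back = (back + 1) % NPAGES;
        }
        return freed;
    }

    int main(void)
    {
        /* Mark a hot working set as recently referenced. */
        for (int i = 0; i < 1024; i++)
            pages[i].referenced = 1;

        printf("freed %d pages\n", pageout_scan(256));
        return 0;
    }
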
>
> His pageout scheme could have had all forward pointers if he had kept
> them in the vnode along with the process.
>
> I agree with him on fork; there is no reason not to do vfork in fork
> that I can think of (other than wnj's hack in csh of having the child
> put stats in the parent process, and that was just gross).
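
The vfork idiom in question, as a minimal sketch: the child borrows
the parent's address space until it execs, so nothing has to be copied
for the common fork-then-exec case.

    /* Sketch of the vfork+exec idiom.  The child runs in the
     * parent's address space and the parent is suspended until the
     * exec, so no page copying (or COW setup) is needed. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = vfork();
        if (pid < 0) {
            perror("vfork");
            return 1;
        }
        if (pid == 0) {
            /* Child: must only exec or _exit -- it is running on
             * the parent's stack. */
            execlp("echo", "echo", "hello from the child", (char *)0);
            _exit(127);        /* exec failed */
        }
        /* Parent resumes only after the child execs or exits. */
        int status;
        waitpid(pid, &status, 0);
        return 0;
    }
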
>
> His comments on the file system switch vs. VFS sort of miss the point
> that the VFS was put in place to allow file systems that don't have
> Unix semantics (NFS being the biggest example).  But I agree that
> there perhaps could have been a better approach; this is why it
> would have been nice to have his stuff get a broader audience.
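
The core of the VFS idea is just a per-file-system operations vector;
a minimal sketch of it (the names are illustrative, not the actual
SunOS vnode interface):

    /* Sketch of the vnode/VFS idea: each file system supplies its
     * own operations vector, so file-system-independent code never
     * assumes Unix on-disk semantics. */
    struct vnode;
    struct vattr;

    struct vnodeops {
        int (*vop_lookup)(struct vnode *dir, const char *name,
                          struct vnode **result);
        int (*vop_read)(struct vnode *vp, void *buf, long off, long len);
        int (*vop_getattr)(struct vnode *vp, struct vattr *vap);
    };

    struct vnode {
        const struct vnodeops *v_op;  /* per-fs dispatch table */
        void *v_data;                 /* fs-private state */
    };

    /* Generic code dispatches through the table... */
    static int vn_read(struct vnode *vp, void *buf, long off, long len)
    {
        return vp->v_op->vop_read(vp, buf, off, len);
    }

    /* ...and NFS, say, plugs in operations that go over the wire
     * instead of to a local disk. */
    extern int nfs_lookup(struct vnode *, const char *, struct vnode **);
    extern int nfs_read(struct vnode *, void *, long, long);
    extern int nfs_getattr(struct vnode *, struct vattr *);

    static const struct vnodeops nfs_vnodeops = {
        .vop_lookup  = nfs_lookup,
        .vop_read    = nfs_read,
        .vop_getattr = nfs_getattr,
    };
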
>
> I disagree on the streams/STREAMS stuff; that shit has no place in
> anything that wants performance.  I'm working with Netflix, trying
> to do better than this:
>
> https://news.ycombinator.com/item?id=15367421
>
> which is a discussion of their writeup of how they managed to push
> ~100Gbit/sec of movies on a single machine.  Sticking STREAMS in the
> middle of that was a bad idea back in the day and a worse idea today.
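
The kind of path that does scale is zero-copy end to end; a minimal
sketch using Linux's sendfile(2) (Netflix's actual FreeBSD path
differs, and TLS is omitted entirely):

    /* Sketch: serving a file with no user-space copies via
     * sendfile(2).  Data moves page cache -> socket inside the
     * kernel, with no per-message module hops of the STREAMS kind.
     * Linux signature; error handling trimmed for brevity. */
    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Assumes 'sock' is a connected TCP socket. */
    static long serve_file(int sock, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        fstat(fd, &st);

        off_t off = 0;
        long sent = 0;
        while (off < st.st_size) {
            ssize_t n = sendfile(sock, fd, &off, st.st_size - off);
            if (n <= 0)
                break;        /* error; give up in this sketch */
            sent += n;
        }
        close(fd);
        return sent;
    }
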
>
> But all in all, an interesting paper.  It's pretty amazing how many of
> the people in the references I know; lots of them I've worked with.  Steve
> Kleiman was my mentor at Sun, Rosenthal and I had tons of OS discussions,
> and I toyed with going to HP Labs to work with John Wilkes.  Definitely
> a trip down memory lane, and it makes me feel super lucky to have gotten
> to work with people of that caliber.
>
> Thanks for the pointer.
>
> --lm


