On Sun, Jul 5, 2020 at 5:06 PM Clem Cole <clemc@ccc.com> wrote:
On Sun, Jul 5, 2020 at 4:42 PM John Cowan <cowan@ccil.org> wrote:
I always used the design principle "Write locally, read over NFS".  
This was the basic idea of AFS.  Originally, the CMU folks did whole-file caching, but by AFS 4.0 they had a Locus-style token manager (think DLM) that scaled really well, so partial caching was allowed.  It actually made a small-disk system possible.  What tended to happen: on your first boot, of course, you had to fill /bin and a lot of heavily used directories.  But your system quickly ended up with only the files you really needed on the local disk: the ones you were writing, and the few you used over and over.
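[The token-managed partial caching described above can be sketched in a toy form.  This is a hypothetical illustration, not AFS's actual protocol: real AFS/DFS tokens were much finer-grained (separate read/write/lock tokens, byte ranges), and every name here is invented.  The point is just the shape of the idea: the client caches only the blocks it touches, and a conflicting write revokes tokens and invalidates those cached blocks.]

```python
class TokenServer:
    """Toy server: hands out read tokens; a write revokes other holders' tokens.
    Invented for illustration -- not the real AFS token manager."""

    def __init__(self, files):
        self.files = dict(files)      # name -> file contents (bytes)
        self.holders = {}             # name -> set of clients holding tokens

    def register(self, client, name):
        self.holders.setdefault(name, set()).add(client)

    def read_block(self, name, off, size):
        return self.files[name][off:off + size]

    def write(self, writer, name, data):
        self.files[name] = data
        # A conflicting write revokes everyone else's token (cache invalidation).
        for c in self.holders.get(name, set()) - {writer}:
            c.revoke(name)
        self.holders[name] = {writer}


class CachingClient:
    """Caches only the blocks it actually reads -- partial, not whole-file, caching."""
    BLOCK = 4                         # tiny block size, just for the demo

    def __init__(self, server):
        self.server = server
        self.cache = {}               # (name, block_index) -> cached block bytes

    def read(self, name, off, size):
        self.server.register(self, name)
        out, end = b"", off + size
        while off < end:
            blk = off // self.BLOCK
            key = (name, blk)
            if key not in self.cache:   # fetch a block only on first touch
                self.cache[key] = self.server.read_block(
                    name, blk * self.BLOCK, self.BLOCK)
            start = off - blk * self.BLOCK
            take = min(end - off, self.BLOCK - start)
            out += self.cache[key][start:start + take]
            off += take
        return out

    def revoke(self, name):
        # Token revoked: drop every cached block of that file.
        self.cache = {k: v for k, v in self.cache.items() if k[0] != name}
```

Reading 4 bytes of an 8-byte file leaves exactly one block in the cache; a write by another client empties it, and the next read refetches fresh data -- which is why, as above, the local disk ends up holding only what you actually use.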

FWIW: I know a couple of people who still run it.  I ran it until a few years ago, when I switched NAS units purely for cost reasons.

There was a neat paper out of CERN a few years ago about how they were turning down their AFS (now OpenAFS) cells: https://iopscience.iop.org/article/10.1088/1742-6596/898/6/062040/pdf

It seems that the idea of a big, shared, distributed file namespace is sadly disappearing. I feel like most of the web-based replacements are not as seamlessly integrated with my preferred toolset as what they're replacing, but I've also become more and more acutely aware that I am not the target audience for those things.

Certainly, real-time collaboration via e.g. Google Drive is pretty amazing and very dynamic, particularly when paired with e.g. real-time video chat, but it also forces one into a particular model of interaction that I've spent most of the last three decades consciously avoiding but now find no escape from.

        - Dan C.