On Sat, Jul 4, 2020 at 8:07 PM Dave Horsfall <dave@horsfall.org> wrote:
Aren't holes part of the file system semantics? 
Exactly -- and that was the problem with NFS.  Consider two write operations; remember, each NFS op is a complete operation with a seek to where it's going.  If the first write fails but the error is not reported (NFS returns errors on close), the second operation seeks past the failed write and UNIX fills the gap with zeros.  The file closes later and the size looks fine, of course.  Oh yeah, who ever bothered to check for errors on close (like the traditional SCCS or RCS commands)?

Later, when you try to read your file back, it will have a bunch of zeros.
As Larry says, running a simple checksum could catch a lot of these.

Anyway, I'm going to be good and lay off a diatribe on NFS.  It sort of worked 'good enough.'  But I will say that other systems (like AFS) were much better in practice, though they lost the war.