On Sat, Jul 04, 2020 at 09:43:04PM -0400, Clem Cole wrote:
> On Sat, Jul 4, 2020 at 8:07 PM Dave Horsfall
> <dave(a)horsfall.org> wrote:
> > Aren't holes part of the file system
> > semantics?
> Exactly -- and that was the problem with NFS. Consider two write
> operations. Remember, each op is a complete operation with a seek to
> where it's going. If the first fails, but the error is not reported
> (NFS returns errors on close), the second operation seeks over the
> failed write -- UNIX puts zeros in the file. The file closes later
> and the size is fine, of course. Oh yeah, who ever bothered to check
> for errors on close (like the traditional SCCS or RCS commands did)?
> Later, when you try to read your file back, it will have a bunch of
> zeros.
So I've encountered lots of holes in NFS files where there shouldn't be
any. So it is/was a thing. But that said, I can't remember a single
case of encountering that on Sun's campus. I don't know if my memory
is failing me, but I do know that when I left Sun and started working
with other NFS implementations, yeah, lots of problems. Somehow Sun
got it right where other people didn't.
The point I'm trying to make is that I don't think NFS was broken by
design, it worked when it was Sun servers and Sun clients. Sun's
entire campus, tens of thousands of machines, used NFS. Sun would
have screeched to a halt if NFS didn't work reliably all the time.
So it was possible to get it to work.
My guess is that other people didn't understand the "rules" and did
things that created problems. Sun's clients did understand and did
not push NFS in ways that would break it.
My memory may not be the greatest but I can still remember being
astonished when I first ran into people saying NFS didn't work.
It worked great for Sun and Sun's customers.