Several list members report having used, or suffered under, filesystem
quotas.
At the University of Utah, in the College of Science, and later, the
Department of Mathematics, we have always had an opposing view:
Disk quotas are magic meaningless numbers imposed by some bozo
ignorant system administrator in order to prevent users from
getting their work done.
Thus, in my 41 years of systems management at Utah, we have not had a
SINGLE SYSTEM with user disk quotas enabled.
We have run PDP-11s with RT-11, RSX, and RSTS, PDP-10s with TOPS-20,
VAXes with VMS and BSD Unix, an Ardent Titan, a Stardent, a Cray
EL/94, and hundreds of Unix workstations from Apple, DEC, Dell, HP,
IBM, NeXT, SGI, and Sun with numerous CPU families (Alpha, Arm, MC68K,
SPARC, MIPS, NS 88000, PowerPC, x86, x86_64, and maybe others that I
forget at the moment).
For the last 15+ years, our central fileservers have run ZFS on
Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
on GNU/Linux CentOS 7.
Each ZFS dataset gets its space from a large shared pool of disks, and
each dataset has a quota: thus, space CAN fill up in a given dataset,
so that some users might experience a disk-full situation. In
practice, that rarely happens, because a cron job runs every 20
minutes, looking for datasets that are nearly full, and giving them a
few extra GB if needed. Within an average of 10 minutes or so, affected
users no longer see disk-full problems. If we see a serious imbalance
in the sizes of previously similar-sized datasets, we manually move
directory trees between datasets to achieve a reasonable balance, and
reset the dataset quotas.
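Here is a rough sketch of the kind of check such a cron job can make;
the 90% threshold, the 10 GiB increment, and the exact zfs invocations
shown are illustrative assumptions, not our actual script:

    #!/usr/bin/env python3
    # Sketch: grow nearly-full ZFS dataset quotas by a few extra GiB.
    # Threshold and increment are illustrative assumptions.
    import subprocess

    THRESHOLD = 0.90          # act when a dataset is 90% of its quota
    INCREMENT = 10 * 2**30    # add 10 GiB at a time

    # -Hp gives tab-separated, exact byte values with no header line.
    out = subprocess.run(
        ["zfs", "list", "-Hp", "-o", "name,used,quota", "-t", "filesystem"],
        capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        name, used, quota = line.split("\t")
        used, quota = int(used), int(quota)
        if quota and used > THRESHOLD * quota:   # quota == 0 means none set
            subprocess.run(["zfs", "set", f"quota={quota + INCREMENT}", name],
                           check=True)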
We make nightly ZFS snapshots (hourly for user home directories), and
send the nightlies to an off-campus server in a large datacenter, and
we write nightly filesystem backups to a tape robot. The tape technology
generations have evolved through 9-track, QIC, 4mm DAT, 8mm DAT, DLT,
LTO-4, LTO-6, and perhaps soon, LTO-8.
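A minimal sketch of the nightly snapshot-and-send step follows; the
dataset name, the off-campus host, and the incremental-send details are
assumptions for illustration only:

    #!/usr/bin/env python3
    # Sketch: take tonight's snapshot and send only the changes since
    # last night's snapshot to an off-site receiver.  Names are hypothetical.
    import datetime, subprocess

    dataset = "tank/home"                    # hypothetical dataset
    remote  = "backup@offsite.example.edu"   # hypothetical off-campus server
    today   = datetime.date.today().isoformat()
    prev    = (datetime.date.today() - datetime.timedelta(days=1)).isoformat()

    subprocess.run(["zfs", "snapshot", f"{dataset}@{today}"], check=True)

    # Incremental send: ship only the blocks changed since yesterday.
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{dataset}@{prev}", f"{dataset}@{today}"],
        stdout=subprocess.PIPE)
    subprocess.run(["ssh", remote, "zfs", "receive", "-F", dataset],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()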
Our main fileserver talks through a live SAN FibreChannel mirror to
independent storage arrays in two different buildings.
Thus, we always have two live copies of all data, and third far-away
live copy that is no more than 24 hours old.
Yes, we do see runaway output files from time to time, and an
occasional student (among currently more than 17,000 accounts) who
uses an unreasonable amount of space. In such cases, we deal with the
job, or user, involved, and get space freed up; other users remain
largely remain unaware of the temporary space crisis.
The result of our no-quotas policy is that few of our users have ever
seen a disk-full condition; they just get on with their work, as they,
and we, expect them to do.
-------------------------------------------------------------------------------
- Nelson H. F. Beebe Tel: +1 801 581 5254 -
- University of Utah FAX: +1 801 581 4148 -
- Department of Mathematics, 110 LCB Internet e-mail: beebe(a)math.utah.edu -
- 155 S 1400 E RM 233 beebe(a)acm.org beebe(a)computer.org -
- Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ -
-------------------------------------------------------------------------------
I used the fair share scheduler whilst a sysadmin of a small Cray at UNSW. Being an expensive machine, the various departments who paid for it wanted, well, their fair share.
In a different job I had a cron job that restricted Sybase backend engines to a subset of the CPUs on a big SGI box during peak hours; at night Sybase had free rein of all the CPUs.
Has anyone done anything similar?
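A rough modern analogue of that cron job might look like the sketch
below (the original would have used IRIX-specific tools; the process
name, CPU sets, and schedule here are assumptions):

    #!/usr/bin/env python3
    # Sketch: confine database engine processes to a subset of CPUs during
    # peak hours and release them at night.  Linux-only (sched_setaffinity);
    # process name and CPU numbers are assumptions.
    import os, subprocess, sys

    PEAK_CPUS = set(range(4))               # daytime: first four CPUs only
    ALL_CPUS  = set(range(os.cpu_count()))  # night: free rein of everything

    cpus = PEAK_CPUS if sys.argv[1:] == ["peak"] else ALL_CPUS

    pids = subprocess.run(["pgrep", "dataserver"],   # hypothetical engine name
                          capture_output=True, text=True).stdout.split()
    for pid in pids:
        os.sched_setaffinity(int(pid), cpus)

    # Example crontab entries (hours assumed):
    #   0 8  * * *  /usr/local/sbin/pin-db peak
    #   0 18 * * *  /usr/local/sbin/pin-db night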
-Steve
> From: KatolaZ
> I remember a 5MB quota at uni when I was an undergrad, and I definitely
> remember when it was increased to 10MB :)
Light your cigar with disk blocks!
When I was in high school, I had an account on the school's computer, a
PDP-11/20 running RSTS, with a single RF11 disk (well, technically, an RS11
drive on an RF11 controller). For those whose jaw didn't bounce off the floor,
reading that, the RS11 was a fixed-head disk with a total capacity of 512KB
(1024 512-byte blocks).
IIRC, my disk quota was 5 blocks. :-)
Noel
----- Forwarded message from meljmel-unix(a)yahoo.com -----
Warren,
Thanks for your help. To my amazement in one day I received
8 requests for the documents you posted on the TUHS mailing
list for me. If you think it's appropriate you can post that
everything has been claimed. I will be mailing the Unix TMs
and other papers to Robert Swierczek <rmswierczek(a)gmail.com>
who said he will scan any one-of-a-kind items and make them
available to you and TUHS. The manuals/books will be going
to someone else who very much wanted them.
Mel
----- End forwarded message -----
> That photo is not Belle, or at least not the Belle machine that the article
> is about.
The photo shows the piece-sensing (by tuned resonant circuits)
chess board that Joe Condon built before he and Ken built the
hardware version of Belle that reigned as world computer chess
champion for several years beginning in 1980 and became the
first machine to earn a master rating.
Doug
> From: "John P. Linderman"
> Brian interviewing Ken
Ah, thanks for that. I had intended going (since I've never met Ken), but
alas, my daughter's family had previously scheduled to visit that weekend, so
I couldn't go.
The 'grep' story was amusing, but historically, probably the most valuable
thing was the detail on the origins of B - DMR's paper on early C ("The
Development of the C Language") mentions the FORTRAN, but doesn't give the
detail on why that got canned, and B appeared instead.
Noel
Decades ago there was an interpreted C in an X10 or X11 app, I believe it
came from the UK. And maybe it wasn't X11, maybe it was SunView?
Whatever it was, the author didn't like the bundled scrollbars and had
their own custom-made one.
You could set breakpoints like a debugger and then go look around at state.
Does anyone else remember that app and what it was called?
Bakul Shah:
This could've been avoided if there was a convention about
where to store per user per app settings & possibly state. On
one of my Unix machines I have over 200 dotfiles.
====
Some, I think including Ken and Dennis, might have argued
that real UNIX programs aren't complex enough to need
lots of configuration files.
Agree with it or not, that likely explains why the Research
stream never supplied a better convention about where to
store such files. I do remember some general debate in the
community (e.g. on netnews) about the matter back in the
early 1980s. One suggestion I recall was to move all the
files to subdirectory `$HOME/...'. Personally I think
$HOME/conf would have been better (though I lean toward
the view that very few programs should need such files
anyway).
But by then the BSD convention of leaving
`hidden' files in $HOME had spread too far to
call back. It wouldn't surprise me if some at Berkeley
would rather have moved to a cleaner convention, just
as the silly uucp-baud-rate flags were removed from
wc, but the cat was already out of the bag and too
hard to stuff back in.
On the Ubuntu Linux systems I help run these days, there
is a directory $HOME/.config. The tree within has 192
directories and 187 regular files. I have no idea what
all those files are for, but from the names, most are
from programs I may have run once years ago to test
something, or from programs I run occasionally but
have no context I care about saving. The whole tree
occupies almost six megabytes, which seems small
by current standards, but (as those on this list
certainly know) in the early 1980s it was possible
to run a complete multi-user UNIX system comfortably
from a single 2.5MB RK05 disk.
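For anyone who wants to reproduce that sort of tally, a small
sketch (it assumes nothing beyond the path named above):

    #!/usr/bin/env python3
    # Sketch: count directories, regular files, and total bytes under
    # $HOME/.config, the kind of tally quoted above.
    import os

    root = os.path.expanduser("~/.config")
    ndirs = nfiles = nbytes = 0
    for dirpath, dirnames, filenames in os.walk(root):
        ndirs += len(dirnames)
        for f in filenames:
            p = os.path.join(dirpath, f)
            if os.path.isfile(p) and not os.path.islink(p):
                nfiles += 1
                nbytes += os.path.getsize(p)

    print(f"{ndirs} directories, {nfiles} regular files, "
          f"{nbytes / 2**20:.1f} MB")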
Norman Wilson
Toronto ON