[TUHS] Quotas - did anyone ever use them?

Rico Pajarola rp at servium.ch
Sat Jun 1 01:55:25 AEST 2019


On Fri, May 31, 2019 at 2:50 AM Arthur Krewat <krewat at kilonet.net> wrote:

> On 5/30/2019 8:21 PM, Nelson H. F. Beebe wrote:
> > Several list members report having used, or suffered under, filesystem
> > quotas.
> >
> > At the University of Utah, in the College of Science, and later, the
> > Department of Mathematics, we have always had an opposing view:
> >
> >       Disk quotas are magic meaningless numbers imposed by some bozo
> >       ignorant system administrator in order to prevent users from
> >       getting their work done.
>
> You've never had people like me on your systems ;) - But yeah...
>
> > For the last 15+ years, our central fileservers have run ZFS on
> > Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months,
> > on GNU/Linux CentOS 7.
> >
> I do the same with ZFS - limit the individual filesystems with "zfs set
> quota=xxx" so the entire pool can't be filled. I assign a ZFS filesystem
> to each individual user in /export/home, and when they need more, they
> let me know. Various monitoring scripts tell me when a filesystem is
> approaching 80%, and I either just expand it on my own based on the
> user's usage, or let them know they are approaching the limit.
>
> Same thing with Netbackup Basic Disk pools in a common ZFS pool. I can
> adjust them as needed, and Netbackup sees the change almost immediately.
>
> At home, I did this with my kids ;) - Samba and zfs quota on the
> filesystem let them know how much room they had.
>
> art k.
>
> PS: I'm starting to move to FreeBSD and ZFS for VMware datastores; the
> performance is outstanding over iSCSI on 10GbE (which Solaris 11's
> COMSTAR apparently is not very good at, especially with small block
> sizes). I have yet to play with Linux and ZFS, but would appreciate
> hearing (privately, if it's not appropriate for the list) about your
> experiences with it.
>
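
In case it's useful as a reference, the per-user setup Art describes might
look roughly like this (the pool name "tank", the user "alice", the 20G cap,
and the 80% threshold are made up for illustration, not taken from his setup):

    # one ZFS filesystem per user, capped so a single user can't fill the pool
    zfs create tank/export/home/alice
    zfs set quota=20G tank/export/home/alice

    # report any user filesystem above 80% of its quota (-p prints exact byte counts)
    zfs list -Hp -o name,used,quota -r tank/export/home |
        awk '$3 > 0 && $2/$3 > 0.8 { printf "%s at %.0f%% of quota\n", $1, 100*$2/$3 }'

A convenient side effect is that the quota is what gets reported as the
filesystem size, so df on the host and SMB clients (as in the Samba setup
for the kids) simply see a smaller disk filling up.
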
At home I use ZFS (on Linux) exclusively for all data I care about (and
also for data I don't care about). I have a bunch of pools ranging from 5TB
to 45TB with RAIDZ2 (overall about 50 drives), in various hardware setups
(SATA, SAS, some even via iSCSI). Performance is not what I'm used to on
Solaris, but in this case convenience wins over speed. I have never lost
any data, even though with that many disks there's always a broken disk
somewhere. The on-disk format is compatible with FreeBSD and Solaris (I
have successfully moved disks between OSes), so you're not "locked in".
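
If it helps as a starting point, a RAIDZ2 pool and an OS-to-OS move look
roughly like this (pool and device names are made up; the import only works
if the target platform supports the pool's on-disk features):

    # six-disk raidz2 vdev: survives any two simultaneous disk failures
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    zpool status tank       # shows faulted or resilvering drives

    # moving the pool to another OS
    zpool export tank       # on the old host
    zpool import            # on the new host: lists pools available for import
    zpool import tank       # then import by name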

