[TUHS] signals and blocked in I/O

Bakul Shah bakul at bitblocks.com
Sat Dec 2 09:42:15 AEST 2017


On Fri, 01 Dec 2017 15:09:34 -0800 Larry McVoy <lm at mcvoy.com> wrote:
> On Fri, Dec 01, 2017 at 11:03:02PM +0000, Ralph Corderoy wrote:
> > Hi Larry,
> > 
> > > > So OOM code kills a (random) process in hopes of freeing up some
> > > > pages, but if this process is stuck in disk I/O, nothing can be
> > > > freed and everything grinds to a halt.
> > >
> > > Yep, exactly.
> > 
> > Is that because the pages have been dirty for so long they've reached
> > the VM-writeback timeout even though there's no pressure to use them for
> > something else?  Or has that been lengthened because you don't fear
> > power loss wiping volatile RAM?
> 
> I'm tinkering with the pageout daemon so I'm trying to apply memory
> pressure.  I have 10 25GB processes (25GB malloced) and the processes just
> walk the memory over and over.  This is on a 256GB main memory machine
> (2-socket Haswell, 28 CPUs, 28 1TB SSDs, on loan from Netflix).

How many times do processes walk their memory before this condition
occurs? 

So what may be happening: a process references a page and
faults; the kernel finds its physical page has been paged out,
so it looks for a free page, and once one is found the process
blocks on the page-in.  If there is no free page, it must wait
until some other dirty page is paged out (a different wait
queue).  As more and more processes do this, the system runs
out of free pages.
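A compilable toy sketch of that path (the function names and
the single "free list empty" flag are stand-ins, not any real
kernel's API):

    #include <stdio.h>
    #include <stdbool.h>

    static bool free_list_empty = true;   /* pretend memory is tight */

    static int alloc_free_page(void)      /* try the free list first */
    {
        return free_list_empty ? -1 : 0;
    }

    static void wait_for_pageout(void)    /* dirty-page wait queue */
    {
        puts("blocked: waiting for pageout to clean a dirty page");
        free_list_empty = false;          /* daemon eventually frees one */
    }

    static void pagein_io(void)           /* disk wait */
    {
        puts("blocked: paging the faulting page back in from disk");
    }

    /* A major fault can block twice: once for a free page, once for
     * the disk read.  Under load processes pile up in the second
     * state, the "stuck in disk wait" the OOM killer cannot get past. */
    static void handle_major_fault(void)
    {
        while (alloc_free_page() < 0)
            wait_for_pageout();
        pagein_io();
    }

    int main(void)
    {
        handle_major_fault();
        return 0;
    }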

Can you find out how many processes are waiting under what
conditions, how long they wait, and how these queue lengths
change over time?  You can use a ring buffer to capture the
last 2^N measurements and dump them in the debugger when
everything grinds to a halt.
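A minimal sketch of such a ring buffer (the sample fields and
names are assumptions; keeping the size a power of two turns
the wrap into a mask instead of a modulo):

    #include <stdint.h>

    #define RING_BITS 12                  /* keep the last 2^12 samples */
    #define RING_SIZE (1u << RING_BITS)
    #define RING_MASK (RING_SIZE - 1)     /* power of two: wrap by mask */

    struct sample {
        uint64_t when;                    /* timestamp */
        uint32_t pagein_waiters;          /* blocked on page-in I/O */
        uint32_t freepage_waiters;        /* blocked on the free list */
    };

    static struct sample ring[RING_SIZE];
    static uint32_t ring_head;            /* total samples taken so far */

    void record_sample(struct sample s)
    {
        ring[ring_head++ & RING_MASK] = s;  /* overwrite the oldest */
    }

When the machine wedges, dumping ring[] and ring_head from the
debugger recovers the last 4096 samples in order.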

> It's the old "10 pounds of shit in a 5 pound bag" problem, same old stuff,
> just a bigger bag.
> 
> The problem is that OOM can't kill the processes that are the problem,
> they are stuck in disk wait.  That's why I started asking why can't you
> kill a process that's in the middle of I/O.

The OS equivalent of RED (random early drop) would be for a
process to kill itself, e.g. when some critical metric crosses
a high-water mark.
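A userspace sketch of that idea, assuming Linux's /proc/meminfo
as the metric (the names and threshold are made up; here free
memory falling below a low-water mark is the mirror image of
usage crossing a high-water mark):

    #include <stdio.h>
    #include <stdlib.h>

    /* Return MemAvailable in kB, or -1 on error (Linux-specific). */
    static long mem_available_kb(void)
    {
        char line[128];
        long kb = -1;
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                break;
        fclose(f);
        return kb;
    }

    /* Call this from the main loop: exit voluntarily before the
     * OOM killer has to pick a victim at random. */
    void maybe_self_destruct(long lowwater_kb)
    {
        long avail = mem_available_kb();
        if (avail >= 0 && avail < lowwater_kb) {
            fprintf(stderr, "memory pressure: exiting voluntarily\n");
            exit(1);
        }
    }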

Another option would be to return EFAULT and let the process
either kill itself or free up a page or something. [I have
used EFAULT to dynamically allocate *more* pages, but there is
no reason the same trick can't be used to free up memory!]
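For what it's worth, here is a sketch of that allocate-on-fault
trick as it looks from userspace, where the bad reference
arrives as SIGSEGV rather than EFAULT (EFAULT is what a system
call handed the same bad address returns); all names here are
illustrative:

    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define ARENA_SIZE (1 << 20)          /* 1 MB of reserved space */

    static void on_fault(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        /* Grant access to the faulting page; returning from the
         * handler retries the instruction, which now succeeds.  The
         * same hook could instead release cached pages under memory
         * pressure.  (mprotect in a handler is not formally
         * async-signal-safe, but is fine for a sketch.) */
        long psz = sysconf(_SC_PAGESIZE);
        char *page = (char *)((uintptr_t)si->si_addr &
                              ~(uintptr_t)(psz - 1));
        mprotect(page, (size_t)psz, PROT_READ | PROT_WRITE);
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sa.sa_sigaction = on_fault;
        sigaction(SIGSEGV, &sa, NULL);

        /* Reserve address space with no permissions: the first touch
         * of each page faults and the handler maps it in on demand. */
        char *arena = mmap(NULL, ARENA_SIZE, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED)
            return 1;
        arena[0] = 'x';                   /* faults; handler grants access */
        printf("allocated on demand: %c\n", arena[0]);
        return 0;
    }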


