Re: [linux-audio-dev] NPTL jack+ardour: large memlock required

From: Lee Revell <rlrevell@email-addr-hidden-job.com>
Date: Mon Dec 12 2005 - 08:35:47 EET

On Mon, 2005-12-12 at 11:09 +1030, Jonathan Woithe wrote:
> > On Thu, 2005-12-08 at 16:50 +1030, Jonathan Woithe wrote:
> > > > On Fri, 2005-11-25 at 11:10 +1030, Jonathan Woithe wrote:
> > > > > I have since discovered (thanks to strace) that the problem lies in an
> > > > > mmap2() call which requests a size of around 8MB. It appears to be
> > > > > part of the NPTL pthread_create() function. The error returned by
> > > > > mmap2() (EAGAIN) indicates either a locked file or "too much locked
> > > > > memory" (according to the manpage). Because this is an anonymous map
> > > > > the problem must have been the latter. Increasing the "locked memory"
> > > > > limit from 20MB to 40MB made jackd start without having to resort to
> > > > > the LD_ASSUME_KERNEL hack.
> > > >
> > > > I stumbled across the same problem a few weeks ago working on another
> > > > project. This is glibc allocating an 8MB (!) stack for each thread. It
> > > > gets the default thread stack size from getrlimit(). With mlockall()
> > > > you will go OOM really fast like that.
> > > >
> > > > The real fix is for JACK to set a reasonable thread stack size in
> > > > jack_create_thread. You can work around the problem by appending this
> > > > line to /etc/security/limits.conf:
> > > >
> > > > * hard stack 512
> > > >
> > > > for a 512KB stack.
> > >
> > > I tried this. Although this reduced the size of the mlock calls I found I
> > > still needed an mlock limit of at least 40MB for jack and ardour to start.
> > > However, even with this, a large ardour session failed to load, with
> > > no error given (beyond the fact that the load failed).
> > >
> > > You mentioned in another post that you're working on a "proper" fix for
> > > jack. I'll wait for this to be implemented before digging into this
> > > issue further since the correct fix might change things. Drop me a line
> > > when a fix has made it into CVS and I'll resume my testing.
> >
> > Um, all my fix will do is reduce the thread stack size to something more
> > reasonable, say 256KB per thread. You're still going to need an mlock
> > limit of more than 40MB to do anything useful.
>
> How large is reasonable? I'm finding that for large (2 hours, 12 tracks)
> ardour sessions I need an mlock limit in excess of 128 MB when
> running under NPTL (I use 256 MB to be safe); otherwise ardour can't even
> open the session, let alone allow any work to be done. When NPTL *isn't* in
> use an mlock limit of the order of 40 MB (or even 20 MB) is perfectly fine
> for the same ardour session and doesn't even result in any error/warning
> messages regarding mlock failures.

OK. I'm not sure what causes this, but I have two ideas.

I have looked at the NPTL (2.3.5) source. Unless you use
pthread_attr_setstack() to manage the stacks yourself, NPTL keeps an
internal cache of thread stacks that could be related to this problem.
I didn't look hard enough to figure out at which point it starts to
free() things; possibly the cache is allowed to grow very large. NPTL
assumes thread stacks are pageable, so it has no reason to limit the
cache, but with mlock()ed stacks I can see how it could end up pinning
quite a lot of memory.

Also, I've found that if a thread is created joinable (the default) and
exits but is never joined, roughly 200 bytes of memory are leaked per
thread; with enough threads this accumulates until you go OOM. Creating
the threads detached prevents the leak.

Lee
Received on Mon Dec 12 12:15:05 2005

This archive was generated by hypermail 2.1.8 : Mon Dec 12 2005 - 12:15:06 EET