Subject: Re: [linux-audio-dev] priority inversion & inheritance
From: Kai Vehmanen (kai.vehmanen_AT_wakkanet.fi)
Date: Thu Jul 11 2002 - 15:43:07 EEST
On Thu, 11 Jul 2002, Martijn Sipkema wrote:
>> There're two separate problems here. Constant nframes might be required
>> even if the application supports engine iteration from outside sources
>> (ie. all audio processing happens inside the process() callback).
>> Ecasound is one example of this.
> But why is this needed? The only valid argument I heard for this is
> optimization of frequency domain algorithm latency. I suggested a
One simple reason, whether a valid design or not, is that there's a lot of
code that handles audio in constant size blocks. For instance if you have
a mixer element in the signal graph, it is just easier if all the inputs
deliver the same amount of data at every iteration.
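To illustrate the point (a hypothetical sketch, not actual ecasound code): with a constant block size, a mixer node reduces to one plain loop over identically sized input buffers, with no partial-block bookkeeping anywhere in the graph:

```c
#include <stddef.h>

/* Hypothetical mixer node: every input delivers exactly nframes
 * samples per iteration, so mixing is a simple nested loop. */
static void mix_blocks(float *out, const float *const *inputs,
                       size_t n_inputs, size_t nframes)
{
    for (size_t f = 0; f < nframes; f++) {
        float sum = 0.0f;
        for (size_t i = 0; i < n_inputs; i++)
            sum += inputs[i][f];
        out[f] = sum;
    }
}
```

With variable nframes, each graph edge would additionally need its own fill counter and partial-block handling.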
In ecasound the trickiest part is that the i/o subsystem uses buffer
objects to store audio. Each buffer object contains nframes of audio.
These audio slots are allocated before real-time processing is started and
cannot be resized on-the-fly.
But the important point is that for low-latency processing, the design
described above has no real negative sides and I see no need to change it.
With the current JACK implementation this design delivers optimal results
both in terms of efficiency and latency... _if_ I ignore the
non-const-nframes issue.
If I want to add correct support for the current API, I either have to a)
change the engine design (basically from using ringbuffers of audio blocks
to ringbuffers of audio samples), which involves making changes to the
majority of the interfaces in ecasound's codebase (multiple MBs of code!),
or b) make a compromise on efficiency and latency and add an intermediary
buffer between the JACK process() and the ecasound engine.
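For the record, option b) would look roughly like this (a minimal single-threaded sketch with hypothetical names; real code would use a proper lock-free ringbuffer and worry about thread-safety). The process() side writes whatever nframes JACK delivers; the engine side only consumes full fixed-size blocks, at the cost of up to one extra block of latency:

```c
#include <stddef.h>

/* Hypothetical sample FIFO decoupling JACK's variable nframes
 * from an engine that wants fixed-size blocks.
 * size must be a power of two (positions wrap via masking). */
typedef struct {
    float  *data;
    size_t  size;       /* capacity in samples, power of two */
    size_t  read_pos;   /* monotonically increasing */
    size_t  write_pos;  /* monotonically increasing */
} sample_fifo_t;

static size_t fifo_available(const sample_fifo_t *f)
{
    return f->write_pos - f->read_pos;  /* unsigned wrap is fine */
}

/* process() side: store whatever JACK handed us this cycle. */
static void fifo_write(sample_fifo_t *f, const float *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        f->data[(f->write_pos++) & (f->size - 1)] = src[i];
}

/* Engine side: consume only when a full fixed-size block is ready. */
static int fifo_read_block(sample_fifo_t *f, float *dst, size_t block)
{
    if (fifo_available(f) < block)
        return 0;                        /* not enough data yet */
    for (size_t i = 0; i < block; i++)
        dst[i] = f->data[(f->read_pos++) & (f->size - 1)];
    return 1;
}
```

This is exactly the compromise mentioned above: the copy costs CPU and the buffering costs latency, which is why I'd rather avoid it.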
Now, as I've said before, if this is just me, then that is not reason
enough (-> keep the non-constant-nframes), but if there are lots of
projects like ecasound, this will be a serious issue. The worst-case
scenario is that the majority of apps implement JACK support in a
suboptimal way. And
so far nobody has come up with any real good arguments for
non-const-nframes. So to me non-const-nframes just means lots and lots of
extra work without any benefits.
> capability interface for JACK as in EASI where it is possible to ask
> whether nframes is constant. The application must still handle the case
> where it is not.
Yup, this is one solution.
>> read/write ops or driven by select/poll. In this case the easiest way to
>> add JACK support is to put a FIFO between the engine and the process()
>> callbacks. Although priority inheritance could be used here, it doesn't
> If the FIFO uses a mutex, it should use some priority inversion prevention
> mechanism, unless both threads run at the same priority. Otherwise there
> is a potential unbounded block on the mutex.
The two threads must run with SCHED_FIFO as they both need to complete
their cycle before the next soundcard interrupt. As Linux is not a
real-time OS (and probably even if it was), priority inheritance would
only solve half of the problem. Calls to disk i/o, network, user i/o and
other subsystems block without deterministic worst-case bounds. No amount
of priority (given by priority inheritance) will save your butt if the
disk head is physically in the wrong place when you need it. On a
dedicated system you can reserve a separate disk for the audio i/o or
prevent other processes from using the disk, but in a GPOS like Linux, it
is always possible that some other process can affect the kernel
subsystems (for instance, access a file and cause the disk head to move at
the worst possible time).
The correct solution is to partition your audio code into real-time
capable and non-realtime parts and make sure that the non-real-time part
is never ever able to block the real-time part. In essence this is very
close to the RT<->non-RT separation advocated by RTLinux, just done on a
different level (between interrupt-driven SCHED_FIFO code and
timer-interrupt/scheduler-driven SCHED_OTHER code).
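As a sketch of what "never able to block" means in code (hypothetical names; C11 atomics used for brevity): the non-RT side may retry as long as it likes, but the RT side only ever does a wait-free try-and-continue, so no priority scheme is needed at all:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical lock-free single-producer/single-consumer slot.
 * Neither side ever takes a mutex. */
typedef struct {
    atomic_bool full;
    int         payload;
} rt_slot_t;

/* Non-RT producer: failing and retrying later is harmless here. */
static bool slot_post(rt_slot_t *s, int msg)
{
    if (atomic_load_explicit(&s->full, memory_order_acquire))
        return false;              /* slot busy, try again later */
    s->payload = msg;
    atomic_store_explicit(&s->full, true, memory_order_release);
    return true;
}

/* RT consumer: wait-free, returns immediately whether or not a
 * message was available -- it can never be blocked. */
static bool slot_take(rt_slot_t *s, int *msg)
{
    if (!atomic_load_explicit(&s->full, memory_order_acquire))
        return false;              /* nothing pending, carry on */
    *msg = s->payload;
    atomic_store_explicit(&s->full, false, memory_order_release);
    return true;
}
```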
> There is hardware that just interrupts at a constant rate. With this
> hardware the number of frames that are ready isn't exactly constant. You
> might assume some value, but if it isn't exactly correct then you'll drift.
Yes, the interrupt intervals and how much data actually is available when
the software is woken up are two different things. But as the nframes
count in any case has an upper bound, you are not free to use the
avail_samples count directly anyway. A natural choice is to always use the
period_count. I've posted one alternative approach to this to
jackit-devel, but at least to me it really didn't seem like a viable
approach.
>> And nframes should be equal to the period size.
> This hardware doesn't have a period size in samples, but time based.
> And so I think it is wrong to have the driver export an interface that
> is not used by the hardware.
Hmm, ok, now we're talking! :) Still, you can always calculate the
theoretical period size in samples and use that as the nframes
count. Like I already mentioned above, in any case you need to honor the
upper bound.
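The calculation itself is trivial (a hypothetical helper, names made up): a card that interrupts every period_us microseconds has a theoretical period of period_us * sample_rate / 10^6 frames, rounded to the nearest integer and clamped to the engine's upper bound:

```c
/* Hypothetical: derive an nframes count from a time-based period.
 * period_us    -- interrupt interval in microseconds
 * sample_rate  -- frames per second
 * max_frames   -- the engine's upper bound on nframes */
static unsigned period_frames(unsigned period_us, unsigned sample_rate,
                              unsigned max_frames)
{
    double frames = (double)period_us * sample_rate / 1000000.0;
    unsigned n = (unsigned)(frames + 0.5);  /* round to nearest */
    return n > max_frames ? max_frames : n;
}
```

E.g. a 10 ms interrupt interval at 48 kHz gives a theoretical period of 480 frames.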
> And besides that, I'm pretty sure there is hardware that doesn't use power
> of 2 sized periods. Should that be a requirement too?
Not a problem as there's no 2^x limitation.
-- http://www.eca.cx Audio software for Linux!
This archive was generated by hypermail 2b28 : Thu Jul 11 2002 - 16:31:20 EEST