Re: [LAD] Some questions about the Jack callback

From: Fons Adriaensen <fons@email-addr-hidden>
Date: Sat Sep 20 2014 - 22:34:17 EEST

On Sat, Sep 20, 2014 at 01:30:43PM -0400, Paul Davis wrote:

> On Sat, Sep 20, 2014 at 1:07 PM, Len Ovens <len@email-addr-hidden> wrote:

> > Is it possible to deal with this in two threads? In the case of generating
> > audio, there is no "waiting" for audio to come in to process and the
> > processing could start for the next cycle right after the callback rather
> > than waiting for the next callback (thinking multicore processors). The
> > outgoing audio is put into storage and the callback only puts it into the
> > audio stream. Effectively, the generation thread would be running in sort
> > of a freerun mode, filling up a buffer as there is room to do so.
>
>
> this doesn't work for instruments.
>
> the audio being generated has to change as rapidly as possible following
> the receipt of some event that changes things (e.g. "start playing a new
> note" or "lower the cutoff frequency of the filter"). fast response/low
> latency for instruments means that all generation happens as-needed, not
> pre-rendered, not "in storage".
>
> for streaming audio playback (from disk or the net or whatever) this
> approach is fine and is equivalent to many buffering schemes.

Right. But even a player has inputs: commands from the user such
as stop, start, and reposition, and the user will expect these to
be acted upon within a reasonable delay. That limits the amount of
audio that can be pre-read and buffered, unless you are prepared
to 'revise' the buffering and handle waiting for data separately.

In the case of instruments such a scheme can be used, provided there
is a well-defined delay between the calculation thread and the one
actually delivering the data to the sound card (which will be the
process callback). For instance, you could prefill the buffer with
N periods of silence, allowing the calculation thread up to that
much delay (and adding the same amount of latency). But you still
need to handle the case where the process() callback doesn't find
enough samples in the buffer. The thing to do in that case is to
replace the missing data with silence, remember how many samples
you have replaced this way, and skip the same number in the buffer
when they become available. That way the latency doesn't change in
case of a buffer underflow.

The simplest way to do this is to use a lock-free buffer that allows
its read and write counts to be modified independently of overflow
or underflow. That is, the buffer keeps its 'logical' state
independent of the data actually being read or written. Such a
buffer is used for example in the zita-*2* apps, which are all
designed to have and maintain a well-defined latency.
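
To make that a bit more concrete, here is a minimal sketch of such a
buffer: single writer, single reader, one channel. This is not the
actual zita code; the names (ringbuf_t, rb_write, rb_read_period) and
the size are made up for the example. The essential point is that
rb_read_period() always advances the read count by a full period,
even when part of it had to be replaced by silence, so frames that
arrive too late end up behind the read position and are never played.

#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

#define RB_SIZE 8192                    /* frames, power of two */
#define RB_MASK (RB_SIZE - 1)

typedef struct
{
    float          data [RB_SIZE];
    _Atomic size_t wpos;                /* frames ever written        */
    _Atomic size_t rpos;                /* frames logically consumed  */
} ringbuf_t;

/* Calculation thread: store up to n frames, return how many fitted.
 * If the reader has run ahead (an underflow happened), part of what
 * is written here is already 'in the past' and will never be read,
 * which is exactly the skipping described above.
 */
static size_t rb_write (ringbuf_t *rb, const float *src, size_t n)
{
    size_t    w = atomic_load_explicit (&rb->wpos, memory_order_relaxed);
    size_t    r = atomic_load_explicit (&rb->rpos, memory_order_acquire);
    ptrdiff_t fill = (ptrdiff_t)(w - r);       /* negative after underflow */
    size_t    room = (size_t)((ptrdiff_t) RB_SIZE - fill);
    size_t    i;

    if (n > room) n = room;
    for (i = 0; i < n; i++) rb->data [(w + i) & RB_MASK] = src [i];
    atomic_store_explicit (&rb->wpos, w + n, memory_order_release);
    return n;
}

/* Process callback: output one period, padding with silence if there
 * are not enough frames, but always advance rpos by the full period
 * so the latency does not grow after an underflow.
 */
static void rb_read_period (ringbuf_t *rb, float *dst, size_t nframes)
{
    size_t    r = atomic_load_explicit (&rb->rpos, memory_order_relaxed);
    size_t    w = atomic_load_explicit (&rb->wpos, memory_order_acquire);
    ptrdiff_t fill = (ptrdiff_t)(w - r);
    size_t    avail = (fill > 0) ? (size_t) fill : 0;
    size_t    n = (avail < nframes) ? avail : nframes;
    size_t    i;

    for (i = 0; i < n; i++) dst [i] = rb->data [(r + i) & RB_MASK];
    memset (dst + n, 0, (nframes - n) * sizeof (float));
    atomic_store_explicit (&rb->rpos, r + nframes, memory_order_release);
}

Prefilling with N periods of silence then just means writing N times
nframes of zero frames before the first process() call; that fixed
offset is the latency the scheme will maintain.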

Ciao,

-- 
FA
A world of exhaustive, reliable metadata would be a utopia.
It's also a pipe-dream, founded on self-delusion, nerd hubris
and hysterically inflated market opportunities. (Cory Doctorow)