Re: [linux-audio-dev] A "best" event delegation strategy?


Subject: Re: [linux-audio-dev] A "best" event delegation strategy?
From: David Olofson (david_AT_olofson.net)
Date: Fri May 30 2003 - 15:21:57 EEST


On Friday 30 May 2003 13.39, Lukas Degener wrote:
> Hi list,
> Here is a problem that is probably already solved in another
> context, so I would like to know some opinions on it:
>
> I am trying to implement a (hopefully :-) ) simple general-purpose
> event delegation architecture.
> The actual application will probably be something like a network of
> modules that can do arbitrary filtering, generation and
> manipulation of MIDI events in real time.

"Simple", "general purpose" and "real time" are terms that don't mix
very well. :-)

> As different kinds of external event sources are probably involved,
> like several MIDI ports, maybe a joystick device and of course a
> GUI, how would one efficiently organize the delegation of events
> passed between the modules, so that everything is still thread-safe?

A rather common and simple approach is to just read all sources once
per buffer cycle, in non-blocking mode. Block only on reads or writes
from/to the "timing master" audio device, or what have you.

The disadvantage with this is that unless the APIs and devices you
read from provide timestamps, events will be quantized to buffer
cycle boundaries. This might be ok for some applications, but it
won't work well for synths and the like, unless you can run them
with buffers of no more than some 1-3 milliseconds.
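For illustration, here's roughly what that per-cycle polling looks
like with ALSA raw MIDI (an untested sketch; parse_and_queue() is a
made-up placeholder for whatever your engine does with the bytes):

        #include <alsa/asoundlib.h>

        /*
         * Call once per buffer cycle from the engine thread.
         * 'midi' must have been opened with SND_RAWMIDI_NONBLOCK,
         * so snd_rawmidi_read() returns -EAGAIN instead of
         * blocking when there is nothing to read.
         */
        static void poll_midi(snd_rawmidi_t *midi)
        {
                unsigned char buf[256];
                ssize_t count;
                while((count = snd_rawmidi_read(midi, buf,
                                sizeof(buf))) > 0)
                        parse_and_queue(buf, count);
        }

Everything read this way is effectively stamped "start of this
buffer", which is exactly the quantization problem mentioned above.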

If you want to do your own timestamping, here's a (relatively) simple
approach:

        1) Run one high priority thread for each event input
           device. (Or one for all devices, if you can block
           on all of them.) Priority should be *higher* than
           the engine thread, or there's no point with this
           arrangement.

        2) Have the thread(s) timestamp incoming events and
           pass them on to the engine thread via lock-free
           FIFOs or similar thread safe non-blocking interfaces.

        3) Have the engine thread check all FIFOs (or what
           have you) once per buffer cycle, pretty much as if
           they were direct interfaces to drivers, used in
           non-blocking mode.

(The "engine thread" here would usually be the real time audio
thread.)
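To make step 2 concrete, here's a minimal single-writer/single-reader
FIFO sketch (untested; all names are invented, and on SMP machines
you'd want real memory barriers rather than relying on 'volatile'):

        #define FIFO_SIZE 256   /* must be a power of two */

        typedef struct {
                double when;            /* stamped in the input thread */
                unsigned char msg[4];   /* raw MIDI bytes or the like */
        } event;

        typedef struct {
                event buf[FIFO_SIZE];
                volatile unsigned read_pos;     /* advanced by reader only */
                volatile unsigned write_pos;    /* advanced by writer only */
        } event_fifo;

        /* Input thread side. Returns 0 if the FIFO is full. */
        int fifo_push(event_fifo *f, const event *e)
        {
                if(f->write_pos - f->read_pos >= FIFO_SIZE)
                        return 0;
                f->buf[f->write_pos & (FIFO_SIZE - 1)] = *e;
                f->write_pos++;
                return 1;
        }

        /* Engine thread side. Returns 0 if the FIFO is empty. */
        int fifo_pop(event_fifo *f, event *e)
        {
                if(f->read_pos == f->write_pos)
                        return 0;
                *e = f->buf[f->read_pos & (FIFO_SIZE - 1)];
                f->read_pos++;
                return 1;
        }

The input thread grabs a timestamp (clock_gettime() with
CLOCK_MONOTONIC, or whatever clock you sync to the audio stream) the
moment an event arrives, fills in an event and calls fifo_push(); the
engine thread fifo_pop()s everything at the top of each buffer cycle,
as in step 3. Note that this is only safe for exactly one writer and
one reader, so use one FIFO per input thread.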

> The three main ideas i can currently think of are:
>
> A: don't do it at all, that is, everything is implemented as simple
> subject/observer patterns, so that the communication is a pure
> module-to-module thing.
> Mutexes et al. would have to be managed by each individual
> plugin.
>
> B: use a global, mutexed event queue. This could be a priority
> queue for time-stamped events, or a simple FIFO.

I'd be careful with those mutexes. When a lower priority thread locks
a resource, it blocks any higher priority thread that needs the same
resource for as long as it holds the lock. This is the classic
priority inversion problem.

Simple version: If you have your GUI talk to the audio thread via a
lock protected object, you're asking for trouble.
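To make the failure mode concrete (a contrived sketch; all the names
are invented):

        #include <pthread.h>

        pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        /* GUI thread (normal priority): */
        pthread_mutex_lock(&lock);
        rebuild_patch_list();   /* may take milliseconds, may even
                                   page fault... */
        pthread_mutex_unlock(&lock);

        /* Audio thread (SCHED_FIFO): */
        pthread_mutex_lock(&lock);      /* stuck until the GUI is done;
                                           meanwhile the audio buffer
                                           underruns */
        current = patch_list_head;
        pthread_mutex_unlock(&lock);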

> C: use local queues. As above, but for each individual module.

Same problems as B, basically, but here, it's easier to get around the
problem by using lock-free FIFOs or similar lock-free constructs.

> Each of the above approaches seems to have its advantages and
> disadvantages. E.g. if queues are used, this would, as far as I can
> judge, make it easy to have feedback cycles.

Yes, but feedback cycles mean you have either latency (buffer
size/cycle time dependent) or a chance of locking up your engine with
infinite loops. Just keep that in mind. :-)

> OTOH this would probably introduce some overhead which approach A
> wouldn't have.

> C would probably involve a single thread for each
> module, or a global "clock" thread that periodically calls a
> "process_queue" method on each module.

JACK proves that it's indeed possible. It's not trivial to get
right, and it's certainly not portable, unless we're talking about
real time operating systems. (Which often have non-standard APIs and
all sorts of peculiarities, so there are still portability issues.)

Anyway, why would you do this with *threads*? Multiple
threads/processes is definitely the hardest design to get to work
reliably in real time, so one would normally want to avoid it as far
as possible.

It's a different story when you want to integrate applications running
in separate processes. That's why we have JACK. There's no other way
to do that without very deep kernel hacks.

> Is there a general "best" way to do this?

Not really, as it depends on what you want to do, but pretty much
every (reliable) real time audio application does something like what
I described above, with a single audio thread and lock-free or at
least non-blocking communication.

You'll have to be more specific to get a more specific answer. :-)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`-----------------------------------> http://audiality.org -'
   --- http://olofson.net --- http://www.reologica.se ---


