Subject: Re: [linux-audio-dev] Realtime restrictions when running multiple audio apps .. Was: Re: disksampler ...
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Sun Jul 16 2000 - 17:05:48 EEST


On Sun, 16 Jul 2000, Juhana Sadeharju wrote:
> >From: Stefan Westerfeld <stefan_AT_space.twc.de>
> >
> >don't want to fully merge apps in the soundserver. Running the complete
> >Cubase with all plugins, X11 I/O, GUI etc. inside your three-threaded
> >server might be possible for one Cubase, but at least if you start the
> >second or another app, you'll run into serious trouble with duplicated
> >symbols, toolkit event loops, etc.
>
> I'm sure Benno didn't mean to do that.

Juhana, you are right :-)

I mean that only the audio/MIDI/disk related parts run within the server;
the GUI and other non-realtime-critical stuff runs in a separate
process which communicates with the engine via IPC.
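
To give a rough idea of the split (the message layout and names below are only
an illustration, not an agreed protocol): the GUI process pushes e.g. parameter
changes over an ordinary pipe or socket to a non-realtime control thread of the
engine, and only that control thread hands them to the audio thread through
lock-free structures.

/* Illustration only: a GUI -> engine control message sent over a pipe or
 * socketpair.  This path is not realtime-critical, so blocking IPC is fine
 * here; only the engine's audio thread has to stay lock-free. */
#include <unistd.h>

struct ctl_msg {
    int   plugin_id;   /* which plugin instance (invented field) */
    int   param_id;    /* which parameter (invented field) */
    float value;       /* new parameter value */
};

/* Called from the GUI process; ctl_fd is the write end of the pipe. */
static int send_param_change(int ctl_fd, int plugin, int param, float value)
{
    struct ctl_msg m = { plugin, param, value };
    return write(ctl_fd, &m, sizeof m) == (ssize_t) sizeof m ? 0 : -1;
}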

>
> aRts works good for KDE but it doesn't work that good for LAD.
> Benno's approach is good because it is minimal and therefore offers
> a plenty of room for developers. aRts looks too complicated and too
> fixed for development. What if one wants to write his own soft synth
> similar to aRts...

The key is that we need a _VERY_ small and lean audio dispatching
"kernel" (the RT soundserver) which imposes certain restrictions on
the "plugins" (audio threads, MIDI threads, disk threads), just like LADSPA
does.
I will try to write an RFC which describes the protocol (it has to be very
simple, without object-oriented features and so on) and a prototype
using the disk sampler as a testbed.
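
To make the "restricted plugin" idea a bit more concrete, here is one possible
shape for such an entry point (purely a guess at this stage, not the actual
RFC): the kernel owns the buffers and chooses the fragment size, and run()
must return quickly without blocking, allocating memory or touching any device
itself.

/* Illustration of a possible plugin callback in the spirit of LADSPA: all
 * audio I/O goes through buffers handed over by the audio "kernel". */
typedef struct rt_plugin {
    void *instance;                          /* plugin-private state */
    void (*run)(void *instance,
                const float *const *inputs,  /* input buffers, one per port */
                float *const *outputs,       /* output buffers, one per port */
                unsigned long nframes);      /* e.g. 32 frames per call */
} rt_plugin;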

Had I not avoided mutexes, pipes, sockets etc. in favour of lock-free
FIFOs / ringbuffers, I would not have achieved the 2.1ms latencies in my
app.
The problem is that we process ONLY 32 samples at a time and have to do that
within 700usecs (0.7msec), which is a VERY short time.
If we include object-oriented and other higher-level features, we will not be
able to deliver such low latencies.
And the pro-audio folks simply do not want to give up these achievable
latencies. Latency is everything for live stuff.
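
(For reference: 32 frames at the usual 44.1 kHz sample rate take about
0.73 msec, which is presumably where the ~700 usec budget comes from.)
The kind of structure that makes this possible is a single-writer /
single-reader ringbuffer; the following is only a minimal sketch of the idea,
not the code from my app. One thread only ever writes wpos, the other only
ever writes rpos, so no mutex is needed; production code would add the proper
memory barriers / atomic accesses.

#define FIFO_SIZE 4096                      /* must be a power of two */

typedef struct {
    float buf[FIFO_SIZE];
    volatile unsigned wpos;                 /* written by the producer only */
    volatile unsigned rpos;                 /* written by the consumer only */
} spsc_fifo;

static unsigned fifo_used(const spsc_fifo *f)
{
    return (f->wpos - f->rpos) & (FIFO_SIZE - 1);
}

static int fifo_write(spsc_fifo *f, const float *src, unsigned n)
{
    unsigned i;
    if (FIFO_SIZE - 1 - fifo_used(f) < n)
        return -1;                          /* not enough room, caller must cope */
    for (i = 0; i < n; i++)
        f->buf[(f->wpos + i) & (FIFO_SIZE - 1)] = src[i];
    f->wpos += n;                           /* publish only after the data is in place */
    return 0;
}

static int fifo_read(spsc_fifo *f, float *dst, unsigned n)
{
    unsigned i;
    if (fifo_used(f) < n)
        return -1;                          /* not enough data yet */
    for (i = 0; i < n; i++)
        dst[i] = f->buf[(f->rpos + i) & (FIFO_SIZE - 1)];
    f->rpos += n;
    return 0;
}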

So I know Stefan is asking why we start from scratch instead of building on
aRts, but I agree with Juhana that only by using a very small audio "kernel"
will we be able to achieve our goal.

The problem with this model is that it forces you to design the apps in a
different way, and it is not as straightforward as the normal programming
model.
Plus, many developers who do not need the performance do not want
to use this programming model, since they do not benefit from having 2.1ms
latency instead of 20msec.

So how does aRts fit into this scheme?
I don't know currently; perhaps we will find a solution to integrate the
audio "minikernel" with aRts.

For now I will try to lay out a simple, very lightweight protocol/kernel, and
general requirements/restrictions for the apps running within this kernel
(e.g. how to use lock-free FIFOs in the right way, how to avoid mutexes,
when it is convenient to use shared mem, etc.).
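
One requirement that will almost certainly be in that list (this is general
Linux realtime-audio practice, not something already written down in the RFC)
is that the realtime threads lock their memory and run under SCHED_FIFO, and
never call malloc(), do file I/O or take locks from inside the audio path.
Roughly:

/* Typical setup for a realtime audio thread/process on Linux: lock all
 * memory to avoid page faults later and ask for SCHED_FIFO scheduling.
 * Needs root privileges. */
#include <sched.h>
#include <sys/mman.h>

static int become_realtime(int priority)           /* e.g. 1..99 on Linux */
{
    struct sched_param p;

    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)    /* no page faults later */
        return -1;

    p.sched_priority = priority;
    return sched_setscheduler(0, SCHED_FIFO, &p);   /* 0 = calling process */
}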

For example, the audio app (= plugin) will not open the audio devices, which
are managed by the audio-kernel. The apps will instead get pointers to
where to read or write the current audio fragment data, or where to get/put
the MIDI data, etc.
With this approach you can, for example, sample from an audio input and
have 5 audio apps running in parallel, all getting the same data, perfectly
in sync and without overhead when dispatching the audio data to all apps,
since the memory is all shared.
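
As an illustration of how such a shared fragment area might look (the segment
name and struct layout here are invented, the real protocol does not exist
yet): the audio kernel writes every new input fragment into a shared memory
segment, each client maps that segment read-only and simply dereferences the
pointer it gets, so N clients see identical, sample-synchronous data with zero
copies.

/* Client side of a hypothetical shared input-fragment area. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define FRAGMENT_FRAMES 32
#define CHANNELS        2

struct shm_fragment {
    unsigned long serial;                        /* bumped once per fragment */
    float samples[FRAGMENT_FRAMES * CHANNELS];   /* the current input block */
};

static struct shm_fragment *map_input_fragment(void)
{
    void *p;
    int fd = shm_open("/audio_kernel_input", O_RDONLY, 0);

    if (fd < 0)
        return NULL;
    p = mmap(NULL, sizeof(struct shm_fragment), PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                                   /* the mapping stays valid */
    return p == MAP_FAILED ? NULL : (struct shm_fragment *) p;
}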

One drawback is that if one plugin crashes, then all the audio
apps will be compromised (e.g. if Cubase crashes, the disksampler will
exit/crash too).
We can live with this, because the crash of an application will compromise
our audio work anyway. What if you are mastering your song and the
sampler (which, say, provides the drumset) crashes, but a 2nd softsynth is
still alive? Will the song be usable? No; thus the above problems are only
solvable by writing correct/bug-free applications.

Do you all agree?

>
> But even in Benno's approach one has to be sure that it is kept as
> simple as possible. For example, we already debated about byte streams
> in the core engine --- not all wanted them, but they are the only way
> to make the engine as general as possible.

Which byte streams do you mean?
The ones inside ALSA, or within LADSPA (my multiple-datatype LADSPA which
allows streams of arbitrary datatypes)?
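
(To illustrate the "multiple datatype" idea: it boils down to tagging each
port with the element type it carries, roughly like the sketch below, which
is only an illustrative shape and not the actual proposal.)

/* Illustration only: a port descriptor that declares what flows through it,
 * so a port can carry float audio, raw integer frames or plain bytes. */
typedef enum {
    STREAM_FLOAT32,     /* ordinary audio/control data */
    STREAM_INT16,       /* e.g. raw soundcard frames */
    STREAM_BYTE         /* arbitrary byte streams (MIDI, file data, ...) */
} stream_type;

typedef struct {
    const char *name;       /* port name */
    stream_type type;       /* element type flowing through this port */
    int         is_input;   /* nonzero for input ports */
} typed_port_descriptor;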

Benno.

>
> Regards,
>
> Juhana

