
Subject: Re: [linux-audio-dev] gerk
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Thu Jul 13 2000 - 19:20:52 EEST


On Thu, 13 Jul 2000, Joe Miklojcik wrote:
> I foresee a problem. What if I want to run Quasimodo, SuperCollider (is
> that ported yet?) and Evo simultaneously? How do I multiplex the
> control data stream to all these different processes? How do they
> resolve contentions for the audio output devices? What if one
> LADSPA plug-in wants to do stuff to the outputs of more than one
> synthesis system -- for example, ring modulate with a carrier from
> Quasimodo and a modulator from Evo?

Yes, this is a nasty issue, and since we want all our synths/samplers
playable in realtime (2-3 msec latencies), running several applications
simultaneously may not deliver the desired performance. Apart from that,
the audio output device has to be shared as well, and that requires
either a userspace daemon or support in ALL kernel drivers (which is a
big mess).

So the way to go is to have a sort of low-latency soundserver which
schedules our applications manually.
One way to do this would be to release the application (only the engine
part) as a loadable module which then gets executed by the soundserver.
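
To make this concrete, here is a rough sketch of what such a module
interface could look like. All the names here (engine_descriptor,
get_descriptor, run_audio and friends) are made up for illustration;
the real interface would of course have to be agreed upon:

#include <dlfcn.h>
#include <stdio.h>

/* Hypothetical descriptor an engine .so exports: one callback per
   work area (audio/midi/disk), plus engine-private state. */
typedef struct engine_descriptor {
    const char *name;
    void (*run_audio)(void *instance);  /* processes exactly one fragment */
    void (*run_midi)(void *instance);
    void (*run_disk)(void *instance);
    void *instance;
} engine_descriptor;

/* The soundserver would load an engine roughly like this: */
engine_descriptor *load_engine(const char *path)
{
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return NULL;
    }
    /* assume every module exports a get_descriptor() entry point */
    engine_descriptor *(*get_descriptor)(void) =
        (engine_descriptor *(*)(void)) dlsym(handle, "get_descriptor");
    return get_descriptor ? get_descriptor() : NULL;
}

The soundserver would then just add the returned callbacks to its
three thread loops.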

For example the disksampler requires 3 threads:
audiothread (highest pri), midi thread (lower pri) and disk thread (lowest pri)

Many audio programs can be decomposed into the above work areas.

(even a hypothetical Cubase VST fits nicely into the above model)

So assume we run Evo and Cubase:

To save some typing I will introduce:
AT = audio thread, MT = midi thread, DT = disk thread

The soundserver will fire up 3 threads for handling AT, MT and DT.
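
For the thread setup itself, a minimal sketch, assuming POSIX threads
with SCHED_FIFO realtime scheduling (spawn_rt_thread and the three
*_loop entry points are names I made up; the priorities are arbitrary,
only their ordering matters):

#include <pthread.h>
#include <sched.h>

/* hypothetical per-thread entry points (the audio one is sketched
   further below) */
void *audio_loop(void *arg);
void *midi_loop(void *arg);
void *disk_loop(void *arg);

/* spawn a SCHED_FIFO thread at the given priority
   (needs root on a stock kernel) */
static pthread_t spawn_rt_thread(void *(*fn)(void *), int prio)
{
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param param;

    param.sched_priority = prio;
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &param);
    pthread_create(&tid, &attr, fn, NULL);
    return tid;
}

void start_threads(void)
{
    spawn_rt_thread(audio_loop, 80);   /* highest priority */
    spawn_rt_thread(midi_loop,  70);   /* lower */
    spawn_rt_thread(disk_loop,  60);   /* lowest */
}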

Assume you run Evo alone:

at each iteration:
the audio thread calls EvoAT() (which returns after one audio fragment
has been processed)
the disk thread calls EvoDT()
the midi thread calls EvoMT()

This is basically the same as what my program does, except that the
soundserver loads the .so module and calls the 3 procedures at each
iteration. The overhead is basically zero (one extra function call).
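
Just to illustrate the engine side: a hypothetical EvoAT()-style
callback could be as simple as this (evo_state_t, render_fragment()
and FRAGMENT_FRAMES are invented names; 32 frames at 44.1 kHz is
roughly a 700 usec fragment cycle):

#define FRAGMENT_FRAMES 32    /* 32 frames / 44.1 kHz ~= 725 usec */

typedef struct {
    float *outbuf;            /* buffer handed to us by the soundserver */
    /* ...the rest of the engine state... */
} evo_state_t;

void render_fragment(float *buf, int nframes, evo_state_t *evo);

/* renders exactly one fragment, then returns to the soundserver,
   which owns all the timing */
void evo_run_audio(void *instance)
{
    evo_state_t *evo = (evo_state_t *) instance;
    render_fragment(evo->outbuf, FRAGMENT_FRAMES, evo);
}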

Now we start up Cubase, and the Cubase callbacks are added to the soundserver.

now at each iteration:
the audio thread calls EvoAT(); CubaseAT();
the disk thread calls EvoDT(); CubaseDT();
the midi thread calls EvoMT(); CubaseMT();

As long as the CPU usage does not go over 100% (the audio thread is the
most sensitive one), both apps will run perfectly in parallel and
PERFECTLY SAMPLE ACCURATE, without any scheduling overhead.
(And the audio device sharing problem is solved as well, since only the
soundserver accesses it.)
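
And the audio thread's inner loop is nothing more than this sketch
(wait_for_fragment(), write_fragment(), engines[] and n_engines are
invented; engine_descriptor is from the module sketch above):

extern engine_descriptor *engines[];  /* registered engine modules */
extern int n_engines;

void wait_for_fragment(void);         /* block until the device wants data */
void write_fragment(void);            /* hand the finished fragment to it */

void *audio_loop(void *arg)
{
    int i;
    (void) arg;
    for (;;) {
        wait_for_fragment();
        /* run every engine back-to-back on the same fragment:
           with Evo and Cubase loaded this is exactly
           "EvoAT(); CubaseAT();" -- sample accurate by construction */
        for (i = 0; i < n_engines; i++)
            engines[i]->run_audio(engines[i]->instance);
        write_fragment();
    }
    return NULL;
}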

In my opinion this is the way to go, since multithreading can become
VERY heavy, especially when we have 700 usec fragment cycles (as in the
disksampler case).

what do you think ?

I do not know anything about ReWire for Cubase (which allows you to wire
together the audio outs/ins of several audio apps running concurrently),
but I suspect that they use a similar strategy, because multithreading
on Windows sucks even more.
Does anyone know how ReWire works?

>
> Oh well. Food for thought anyway.

Burp
:-)

I had already thought about this issue, but the implementation of the
disksampler made this task decomposition even clearer.

Benno.

