

Subject: Re: [linux-audio-dev] Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Tue Nov 21 2000 - 13:18:18 EET


On Tue, 21 Nov 2000, Paul Sladen wrote:
>
> Anything that /depends/ upon a specific language is fundamentally flawed..
> if you have an API that ports easily across all languages, then you know
> that you have come up with a good answer... Why /shouldn't/ I be able to
> knock up a quick plugin in Python etc...

Yes, I agree.
Making the model compatible with non-OOP languages also makes it
a lot easier to create bindings for several programming languages.
Basically what we need is a protocol that has some OOP design in it,
but does not depend on OOP languages.
(Is GTK a good analogy here? I'm not familiar with GTK, so I may be wrong.)
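
To make this concrete, here is a rough sketch of what such a C-level
interface could look like (all names here are made up for illustration;
LADSPA's descriptor struct follows the same general pattern). An opaque
instance handle plus a table of function pointers gives you "objects"
without depending on a C++ ABI, and is easy to wrap from Python, Perl
and friends:

    /* opaque per-instance state */
    typedef void *PluginHandle;

    typedef struct {
        const char *name;    /* human-readable plugin name */

        /* "methods": the first argument plays the role of `this` */
        PluginHandle (*instantiate)(unsigned long sample_rate);
        void (*connect_port)(PluginHandle h, unsigned long port,
                             float *buffer);
        void (*run)(PluginHandle h, unsigned long nframes);
        void (*cleanup)(PluginHandle h);
    } PluginDescriptor;

    /* each shared object exports one well-known entry point that a
       host (or a foreign-language binding) can dlsym() and call */
    const PluginDescriptor *plugin_descriptor(unsigned long index);

A binding then only has to wrap one struct of function pointers, which
every language with a C FFI can do.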

> >
> > Plus this delegating the establishment/destruction of connections to
> > a separate thread is not going to solve our problems.
> But what it does do is avoid disrupting the RT process when it /needs/
> to be running. The server needs to run non-RT, and the plugin chain,
> and only the plugin chain, RT.

This argument needs more discussion, but I do not want to rule out the
possibility of making the connections in realtime (delay-free).
Of course, work that is too heavy needs to be moved to a lower-priority
thread (for example, plugin instantiation/destruction must be performed
there, otherwise the RT audio stream will get disrupted).
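
One way to do this (just a sketch under my assumptions, all names made
up) is to let the RT thread post requests into a fixed-size
single-writer/single-reader ring and have the lower-priority thread
drain it, so the RT side never takes a lock or makes a syscall:

    #define CMD_RING_SIZE 64         /* power of two */

    typedef struct {
        int   opcode;                /* e.g. CMD_INSTANTIATE, CMD_DESTROY */
        void *arg;                   /* descriptor, instance pointer, ... */
    } Command;

    static Command ring[CMD_RING_SIZE];
    static volatile unsigned read_pos  = 0;  /* owned by worker    */
    static volatile unsigned write_pos = 0;  /* owned by RT thread */

    /* called from the RT thread: O(1), no locks, no syscalls */
    static int post_command(Command c)
    {
        unsigned next = (write_pos + 1) & (CMD_RING_SIZE - 1);
        if (next == read_pos)
            return -1;               /* ring full: report an overrun */
        ring[write_pos] = c;
        write_pos = next;            /* publish the new entry */
        return 0;
    }

    /* called from the lower-priority thread after being woken up */
    static void drain_commands(void (*execute)(Command))
    {
        while (read_pos != write_pos) {
            execute(ring[read_pos]);
            read_pos = (read_pos + 1) & (CMD_RING_SIZE - 1);
        }
    }

(Strictly speaking, on SMP you would also need memory barriers here;
volatile alone only keeps the compiler from reordering.)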

>
> The only option that I've come up with is passing a "timeframe" parameter
> around too.... example:

(....)

Processing plugins on an SMP machine while achieving good scalability
AND low latency is not a trivial task ....
If you can demonstrate that your model scales well and does not require
pipelining (e.g. adding several audio fragments of latency to get rid of
interdependencies), then I'm all for it.

Plus I think that your timeframe proposal will land us in the
pipelining case, and as said above this may destroy the
"realtime" attribute.

Anyway, what I was proposing is to fire up a
"master thread" and one worker_thread() per CPU.
There would be only one sync point to exchange data:
basically the master thread blocks on audio I/O and exchanges data with
the worker threads, waking them up beforehand (via a pipe(), for example).
After collecting the data from all worker threads, it will probably have to
perform a few more operations, like downmixing, applying some common FX
and routing the audio to the desired audio outs / HDR tracks.
And this common processing (done within the master thread) is the part
which does not scale.
If this task takes, let's say, 10% of the CPU time, it will be lost on ALL CPUs.
But I think that this cannot be solved easily (if at all).
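
Here is roughly how I imagine it with POSIX threads (again only a
sketch under the assumptions above: a dual-CPU box, one pipe per worker
for the wakeup, one shared pipe for completion; all names made up):

    #include <pthread.h>
    #include <unistd.h>

    #define NCPUS 2                      /* assumption: dual-CPU machine */

    static int wake_fd[NCPUS][2];        /* one pipe per worker          */
    static int done_fd[2];               /* workers report back here     */

    static void *worker_thread(void *arg)
    {
        long id = (long)arg;
        char token;

        for (;;) {
            read(wake_fd[id][0], &token, 1); /* sleep until woken        */
            /* ... run this worker's share of the plugin graph ...       */
            write(done_fd[1], &token, 1);    /* tell the master we're done */
        }
        return NULL;
    }

    static void master_loop(void)
    {
        char token = 0;
        int  i;

        for (;;) {
            /* block on audio I/O here (write the previous fragment)     */
            for (i = 0; i < NCPUS; i++)      /* wake all workers          */
                write(wake_fd[i][1], &token, 1);
            for (i = 0; i < NCPUS; i++)      /* the single sync point     */
                read(done_fd[0], &token, 1);
            /* the non-scaling part: downmix, common FX, routing to the
               audio outs / HDR tracks; if this takes 10% of a fragment,
               that 10% is serialized on every CPU (Amdahl's law)         */
        }
    }

    int main(void)
    {
        pthread_t tid;
        long i;

        pipe(done_fd);
        for (i = 0; i < NCPUS; i++) {
            pipe(wake_fd[i]);
            pthread_create(&tid, NULL, worker_thread, (void *)i);
        }
        master_loop();
        return 0;
    }

The pipe wakeups and completion reads are then the only kernel
crossings per audio fragment.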

One consoling thing:
this (for example) 10% loss of CPU power is not really lost:
since all CPUs perform system-friendly waits (no busy-waiting),
other threads which need the CPUs, like the disk I/O and GUI rendering
threads, can utilize this time.
But of course we want our SMP DAW to be capable of utilizing at
least 70-80% of each CPU.

cheers,
Benno.



