Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage


Subject: Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Fri Nov 17 2000 - 17:05:30 EET


Paul BD: can you please comment on the last part of this mail
(hdr_callback()): would such a model be viable for implementing your
record-while-play with instant punch-in/out?

On Fri, 17 Nov 2000, David Olofson wrote:
> On Thu, 16 Nov 2000, Benno Senoner wrote:
>
[...]
>
> > Plus the system needs to provide more than one datatype since float is
> > not going to solve all our problems.
>
> Right; I have a few models for this by now, ranging from very, very
> basic designs, such as enumerating a fixed set of API built-in
> datatypes, nearly all the way to IDL.
>
> I think I'll stop slightly above the fixed datatype set variant;
> there will be a bunch of built-in datatypes as of the 1.0 API, but
> hosts should be able to use translator plugins or libraries to deal
> with newer or custom types. (Note: No need to understand types as
> long as you're connecting ports with the same type ID - just move
> the raw data.)

I propose using the dynamic datatype stuff I implemented some time
ago: an unlimited number of datatypes, and the host does not need to
know how to interpret the data, as long as it finds plugins where
the output_datatype of plug A matches the input_datatype of plug B.
(Of course, plugs are required to support at least one of the
"builtin" datatypes (e.g. float, double, int16, int32), otherwise
the system will not be usable.)
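A minimal sketch of the matching, in C (all names here are invented
for illustration, not an actual API):

/* a port carries an opaque type ID; the host never interprets the
   data, it only compares IDs when connecting */
typedef struct {
    int type_id;   /* e.g. TYPE_FLOAT, TYPE_INT16, or a custom ID */
    void *buffer;  /* raw data, opaque to the host */
} port_t;

int connect_ports(port_t *out, port_t *in)
{
    if (out->type_id != in->type_id)
        return -1;             /* no match: try a translator plug */
    in->buffer = out->buffer;  /* same type: just move the raw data */
    return 0;
}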

>
> I'm *not* going to include low level binary format info in the data
> type descriptors, as it's simply not worth the effort. I can't see
> plugins adding that many new data types that hosts will ever need to
> worry about translating, so it's not worth the incredible added
> complexity.

Agreed: any translation is up to translator plugins (if any),
as nicely demonstrated in my dynamic-type LADSPA experiments.
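E.g. a float -> int16 translator plug is little more than this
(a sketch; the name and the assumption of floats normalized to
[-1.0, 1.0] are mine):

/* inserted by the host between two ports whose type IDs differ */
void translate_float_to_int16(const float *in, short *out,
                              unsigned long n)
{
    unsigned long i;
    for (i = 0; i < n; i++) {
        float s = in[i];
        if (s >  1.0f) s =  1.0f;   /* clip to the valid range */
        if (s < -1.0f) s = -1.0f;
        out[i] = (short)(s * 32767.0f);
    }
}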

> This ain't easy, I tell ya'... :-) Anyway, I'm open to further
> discussion. I have a design that I think is quite sensible more or
> less designed, and 50% coded, but I'm open to ideas. (Of course, the
> point with the first release is to take out the chainsaws and axes
> and bash away - this is far too complex to be solved at the first
> attempt at an implementation.)

Perhaps David and I should keep iterating (discussing privately)
until we have some design to show? (at least in the form of a diagram,
a detailed description, or working code)

Or should we keep this public?
But I warn you that this could involve months of discussion
(perhaps a bit boring for some of you) and N models thrown away and
rebuilt from scratch.

This discussion is somewhat related to LADSPA, but as said, the API
is not meant as a replacement for existing plugin APIs;
it is a sort of managing/routing API.

[ ... SMP ... ]

> I'm not talking about splitting the net in partial chains that
> are connected end-to-end, but running the same net on all CPUs,
> interleaved.

I disagree about the efficiency of this model:
OK, it has the advantage of keeping all CPUs busy, but the
performance hit may be big (cache effects etc.), and, as you
mentioned, clusters are almost impossible to run using this model.

I prefer this method of load balancing:

you have M plugins and distribute them among N CPUs,
where M is usually much higher than N.
You will object that some plugs are very time consuming
while others are very lightweight,
but this issue can be solved quite easily:
give each plugin a CPU usage index
(similar to the MHz numbers Steve posted for his plugs).
That way we can distribute the load among CPUs near-optimally.

The fun starts when there are multiple applications running on an SMP
box: each "application" (= a shared lib) would have to distribute its
private plugs / processing algorithms in a way that the global CPU load
is distributed evenly.

It's not an impossible task, but it requires careful design.
(In practice we need a call where the app asks:
app: "hey soundserver: I want to add a plug, which CPU should it run on?"
soundserver: "hmm .. let me look: this plug uses X MHz and CPU 3 is only
lightly loaded, so use CPU 3")
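The server side of that call could be as simple as this greedy
assignment (a sketch; the names, NUM_CPUS and the MHz cost unit are
all assumptions):

#define NUM_CPUS 4              /* example value */

static int cpu_load[NUM_CPUS];  /* accumulated cost per CPU, in MHz */

int soundserver_choose_cpu(int plug_cost_mhz)
{
    int i, best = 0;
    for (i = 1; i < NUM_CPUS; i++)      /* find the least loaded CPU */
        if (cpu_load[i] < cpu_load[best])
            best = i;
    cpu_load[best] += plug_cost_mhz;    /* account for the new plug */
    return best;
}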

This is a simplification, since I'm still unsure whether to use a
two-level model, or to place plugins and apps in the same pot.

On the other hand, using the single-level model would have the
advantage that applications do not need to implement a LADSPA host
over and over again.

E.g. Ardour needs to run a tree of plugins on some of its tracks:
it could build the plugin-net using functions supplied by the server,
and then, with a single callback, the server would process all the data
(see the sketch below). That way it would become much easier to
distribute the load.
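Something along these lines (every vs_* name is hypothetical, just
to illustrate the idea):

vs_net_t    *net;
vs_plugin_t *eq, *comp;

/* build the net once, using functions supplied by the server */
net  = vs_net_create(server);
eq   = vs_net_add_plugin(net, "ladspa:eq");
comp = vs_net_add_plugin(net, "ladspa:compressor");
vs_net_connect(net, vs_output(eq, 0), vs_input(comp, 0));

/* then one call per fragment processes the whole net; the server
   decides on which CPUs (or cluster nodes) the plugs actually run */
vs_net_process(net, frames_per_fragment);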

What I want to avoid is apps like Ardour being forced to use a totally
different model in order to use the "virtual studio" API.

But yes, I'm beginning to like this "delegate the processing of LADSPA
chains to the server" model.

One of my concerns is that besides LADSPA, DAW applications may have to
perform other CPU-intensive stuff (which cannot be covered by LADSPA,
and perhaps needs additional datatypes), and here we will need an
additional API (MAIA etc.).
But this will not hinder anyone, since all the stuff that can be
implemented in LADSPA can be done here, without the need to learn a new
API and/or to change your application in order to support it.

( ... clusters ... )

> nodes, non-constant data flying around! Also note that some plugins
> will have huge problems running in this kind of environment...)

Exactly; therefore I propose avoiding this model, since the advantages
are marginal (if they exist at all).
We are still assuming that the number of plugins is much greater than
the number of processing units (CPUs or networked machines).

>
> Well, we are not talking sub 5 ms for a cluster for yet a while, but
> it should beat the current MacOS and Windows software solutions. (And
> eat them alive when it comes to processing power of course, but
> that's of little value if the latency is unworkable.)

But I am pretty confident that 10 msec clusters are practical on 100 Mbit
LANs (switched and dedicated to the audio cluster, without 1000 people
running remote X11 sessions in the background).
And that should cause some oohs and aahs among the Win/Mac folks.

>
> I'm more into this "a plugin can be a net" thing... How about the
> soundserver being the top-level host, while *all* applications
> actually are plugins? Any points in their nets that the application
> would like to connect to other applications, would be published in
> the form of ports on that application's plugin interface. The
> soundserver then connects ports just like any other plugin host.

Yes, this sounds interesting, but as said we will need an API which lets
the application build the plugin-net it desires, which is then
effectively run by the soundserver, so that it can distribute the load
among CPUs or even among the nodes of a cluster. (You may think that a
cluster is much more complex than an SMP box, but as long as you
minimize intercommunication (which is wanted anyway, even on shared-mem
architectures), the handling is quite similar.)

And of course the application can publish the "connectors" it desires
(internal inputs/outputs, or the ins and outs of "private" plugins).
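Again with hypothetical vs_* names, publishing could look like:

/* the whole app registers itself with the server as one plugin */
app = vs_register_app(server, "hd_recorder");

/* publish the connectors we want to expose to other apps/plugs
   (here: the master outs of our private mixer plug) */
vs_publish_port(app, "master_out_l", vs_output(mixer, 0));
vs_publish_port(app, "master_out_r", vs_output(mixer, 1));

/* and ask the server to call us back once per fragment */
vs_set_process_callback(app, hdr_callback);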

For example, a simple HD recording app would look as follows
(assume the soundserver calls this callback once per fragment):

hdr_callback() {

    /* read the buffers where the soundserver placed the current
       fragment(s), one per soundcard input channel */
    fetch_soundcard_input_buffers();

    /* fetch the fragments the disk thread streamed in from disk;
       needed if we have to process already-recorded tracks too */
    fetch_input_buffers_from_disk_thread();

    /* run the LADSPA plugin chain on the fragment */
    execute_LADSPA_process_chain();

    /* send the resulting data to the soundcard outputs, and to the
       disk thread for the tracks which are being recorded */
    send_output_buffers();
}

I'm not sure if my model misses something, but I'd like confirmation
that it can accommodate stuff like Ardour's punch-in/out.
(Paul described how it works some time ago, and it is far from trivial,
since you need to buffer data and overwrite tracks at the right place,
in order to compensate for the disk buffering issues.)

Anyway, if we can develop such a model where processing / mixing /
routing can be delegated to the virtual studio API, then HDR apps become
nothing more than a disk streaming engine, a GUI, and a bunch of plugins
which do the processing.
(And as said, the mixer plugin can be reused over and over again;
long live code reuse!)

And as I mentioned, there are ZERO context switches in this model,
making its scalability and latency excellent.

PS: like the LADSPA nets, there will be event-nets, which will be handled
in a similar way.
And of course there will be event-processing plugins, which will allow
cool things like connecting the MIDI sequencer to a remapping plugin
which transforms pitchbend messages into NRPNs to drive the LP filter
of a standalone synth (the possibilities are endless); a sketch of such
a plug is below.
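(the event_t layout, the EV_* tags, NRPN_LP_CUTOFF and the queue
helpers are all invented here, just to show the idea):

typedef struct {
    int type;    /* EV_PITCHBEND, EV_NRPN, ...           */
    int param;   /* NRPN number (unused for pitchbend)   */
    int value;   /* 14-bit pitchbend / 7-bit NRPN value  */
} event_t;

void remap_process(const event_t *in, int n, event_queue_t *out)
{
    int i;
    for (i = 0; i < n; i++) {
        event_t ev = in[i];
        if (ev.type == EV_PITCHBEND) {
            ev.type  = EV_NRPN;
            ev.param = NRPN_LP_CUTOFF;    /* synth-specific number */
            ev.value = in[i].value >> 7;  /* scale 14 bits down to 7 */
        }
        event_queue_push(out, &ev);       /* others pass through */
    }
}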
 
waiting for more food for thought ...

Benno.


