
Subject: Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Stefan Westerfeld (stefan_AT_space.twc.de)
Date: Sun Nov 19 2000 - 15:56:54 EET


   Hi!

On Fri, Nov 17, 2000 at 03:59:44PM -0500, Paul Barton-Davis wrote:
> >Well, there is nothing wrong with having a cool interface for "playing
> >things", as well as having a cool interface for "recording things to
> >disk". But they IMHO really shouldn't be mixed. It should look like
> >
> > +----------------+         +---------------+
> > |   HDR System   |         |   Softsynth   |
> > +---v--v--v--v---+         +---v--v--v--v--+
> >     |  |  |  |                 |  |  |  |
> > +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> > |               multichannel play interface                |
> > +----------------------------------------------------------+
> >
> >This could for instance be a snapshot where you use software synthesis live
> >on some channels of your multichannel play interface, while you playback
> >something from the HDR System. Later on, the user may choose to record
> >the softsynth stuff to the HDR system, so he can go on and connect the
> >outputs of the softsynth stuff to some of the inputs of the HDR system.
>
> No question. And this is precisely what I am now doing with ardour's
> internals. At the moment, it looks like this
>
> +----------------+
> |   HDR System   |
> +---v--v--v--v---+
>     |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |               multichannel audio interface               |
> +----------------------------------------------------------+
>
> the next incarnation will be:
>
> +----------------+
> |   HDR System   |
> +---v--v--v--v---+
>     |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |                   host/patchbay/router                   |
> +---v--v--v--v---------------------------------------------+
>     |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |               multichannel audio interface               |
> +----------------------------------------------------------+
>
> which will then allow:
>
> +----------------+         +---------------+
> |   HDR System   |         |   Softsynth   |
> +---v--v--v--v---+         +---v--v--v--v--+
>     |  |  |  |                 |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |                   host/patchbay/router                   |
> +----------------------------------------------------------+
>     |  |  |  |                 |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |               multichannel play interface                |
> +----------------------------------------------------------+

I think there is still one important aspect of the aRts/MCOP stuff that
isn't clear in our discussion.

Suppose "HDRSystem", "Softsynth" and "MultiChannelPlay" would be MCOP
components, or composed of MCOP components. Then you wouldn't need a
host/patchbay/router object. The framework does routing of arbitary signal
flow graphs.

I.e.

input:  a signal flow graph, consisting of names of components, connections,
        and values to be used (you can create such things with artsbuilder;
        screenshot here: http://www.arts-project.org/aol/pics/scr-0.3.1-3.jpg)

output: a running network with the components connected in the right ways,
        which periodically calls the calculateBlock functions of the
        objects in the flow graph
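
To make that concrete: a component in such a flow graph is little more than
its stream declaration plus a calculateBlock() implementation. Roughly like
this (written from memory of the aRts examples, so take the exact IDL
keywords and class names with a grain of salt):

    // add.idl - declare the module and its synchronous audio streams
    interface Synth_ADD : SynthModule {
        in audio stream invalue1, invalue2;
        out audio stream outvalue;
    };

    // add_impl.cc - the generated skeleton provides the stream buffers;
    // the flow system calls calculateBlock() periodically on every node
    class Synth_ADD_impl : virtual public Synth_ADD_skel,
                           virtual public Arts::StdSynthModule
    {
    public:
        void calculateBlock(unsigned long samples)
        {
            for (unsigned long i = 0; i < samples; i++)
                outvalue[i] = invalue1[i] + invalue2[i];
        }
    };
    REGISTER_IMPLEMENTATION(Synth_ADD_impl);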

Now if you bring up overhead concerns, there is a simple answer: aRts/MCOP
doesn't suffer the full overhead of the network-transparent,
format-independent, inter-process-synchronized machinery here, because it
simply doesn't use it here.

There is a distinction between

 * synchronous streaming
 * asynchronous streaming

where the former works in-process and assumes that all plugins run at the
same speed and are synchronized (i.e. the network would block for the
result of a plugin), while the latter assumes that plugins may or may not
produce data at any rate (more event-based in nature).
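
In code terms, the two look roughly like this (a generic sketch to show the
shape of the interfaces, not the actual aRts classes):

    // synchronous streaming: every node delivers exactly 'samples' values
    // per call, so the whole graph can be pulled in lock-step
    struct SyncNode {
        virtual void calculateBlock(unsigned long samples) = 0;
        virtual ~SyncNode() {}
    };

    // asynchronous streaming: a node hands over a packet whenever it has
    // one (decoder output, events, ...); sender and receiver need not
    // agree on any fixed rate
    struct AsyncReceiver {
        virtual void processPacket(const float *data, unsigned long size) = 0;
        virtual ~AsyncReceiver() {}
    };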

What I am arguing for is that, rather than inventing the whole inventory
of "how to connect plugins together in a user-controlled way" - which is
what you want to have in your "patchbay" object - you make the objects
expose their ports in an aRts/MCOP fashion, so that all the algorithms for
handling flow graphs - i.e. loading/saving them, connecting plugins,
performing their execution, and so on - work the same way, because the
components are compatible.

Besides, there are complications if you allow cyclic flow graphs with delay
elements, which you better solve only once ;).
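
To illustrate why the delay elements matter: a cycle only becomes
schedulable because the delay's output for the current block depends only
on the previous block, so a scheduler can treat that output as already
known and break the cycle there. A sketch (a hypothetical block-sized
delay, for simplicity):

    #include <vector>

    // one-block delay: output for block N is the input from block N-1,
    // which is what lets a scheduler break a feedback cycle at this node
    class DelayBlock {
        std::vector<float> previous;
    public:
        explicit DelayBlock(unsigned long blocksize)
            : previous(blocksize, 0.0f) {}

        void calculateBlock(const float *in, float *out, unsigned long samples)
        {
            for (unsigned long i = 0; i < samples; i++) {
                out[i]      = previous[i];   // emit what came in last block
                previous[i] = in[i];         // remember this block
            }
        }
    };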

> >As for MVC. Think of it this way: the model IS the HDR System component,
> >or at least the model is one part of the HDR System component. The view
> >and controller then access the HDR System with various methods to achieve
> >whatever they want. That way, you can have different apps/components which
> >act as views/controllers.
>
> The reason this doesn't work is that the data flows in multiple
> directions. the model is not *just* the HDR system. its also the
> router, the audio interface, a MIDI interface, and so on.
>
> I have had immense pains with my software dealing with the fact that
> we have at least 3 sources of control events in many of my programs:
>
> * MIDI port(s)
> * X input devices
> * audio interface state changes

I see the problem - I haven't really intermixed these very heavily up to
now, so I can't tell you much about whether Qt and/or MCOP are or are not
helpful in that situation.

> >In fact, it is the GLUE between the components which you need to standardize
> >to make components interoperable, and aRts/MCOP provides IMHO good ways for
> >doing so. I mean IDL files, the MCOP protocol, the code generator, component
> >trading, self-description of components, etc.
>
> But Stefan, this is all total overkill and in some sense, too much
> overhead, for things in the audio processing chain.
>
> MCOP looks nice for communicating control parameters and events around
> the place, but it really seems to me to have nothing to do with
> the audio stream in a low latency system.

It works well for both: communicating streams of audio data and
communicating parameters - you can run 200 connected MCOP audio plugins in
one process and get useful output out of it, and the CPU power spent on the
communication is still very small compared to the CPU power spent on the
calculation, so I think it is a suitable solution performance-wise.

> Signal1<void,Audio *> input_ready;
> input_ready.connect (slot (plugin, &Plugin::run));
>
> results in:
>
> plugin->run (Audio *data);
>
> being called every time we do "input_ready()".

Right. Now I see why you were referring to libsigc++. You can certainly do
this, and use sigc++ as a replacement for a function pointer.

Basically, what we are talking about is two concepts:

 * the plugins implicitly schedule themselves, since they emit signals,
   which lead to other plugins being called, which emit signals, ... and
   so on ... until, in effect, the flow graph calculates itself
   periodically

 * the plugins are scheduled, i.e. a scheduler finds out which plugins
   should be called when, with which blocks of data, to achieve the
   execution of the signal flow graph

Your solution uses the sigc++ function-pointer abstraction to achieve the
first, while my solution uses the MCOP object model (which standardizes
how plugins declare that they have input/output streams) to achieve
the second.
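
To make the second concept concrete, a toy version of such a scheduler
might look like this (a sketch under the assumption that the nodes have
already been brought into execution order; the real scheduler also has to
handle cycles, differing block sizes, and so on):

    #include <vector>

    struct Module {
        virtual void calculateBlock(unsigned long samples) = 0;
        virtual ~Module() {}
    };

    // the scheduler decides when each module runs; the modules only
    // declare their ports and implement calculateBlock, so the
    // scheduling strategy can change without touching any plugin
    void runGraph(std::vector<Module *> &ordered,
                  unsigned long samples, unsigned long blocks)
    {
        for (unsigned long b = 0; b < blocks; b++)
            for (std::vector<Module *>::size_type i = 0; i < ordered.size(); i++)
                ordered[i]->calculateBlock(samples);
    }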

IMHO the second has advantages: for instance, you can change the way you
schedule plugins without changing the plugins themselves, e.g. by adding
feedback-driven scheduling in the form

       |    +-------+
       |    |       |
       |    V       |
    [ filter ]      |
       |            |
    [ delay ]       |
       |            |
       +------------+
           |
           V

You can also control timing at the sample level, i.e. you can split up
blocks automatically if you see that an event should take place in the
middle of one.
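
For example, "splitting up a block" boils down to something like this (a
sketch, assuming the event carries a sample-accurate position inside the
block):

    // calculate up to the event position, apply the event (e.g. a
    // parameter change), then calculate the rest of the block -
    // block-based processing, but sample-accurate event timing
    template<class Node, class Event>
    void calculateSplit(Node &node, unsigned long samples,
                        unsigned long eventPos, const Event &applyEvent)
    {
        if (eventPos > 0)
            node.calculateBlock(eventPos);
        applyEvent(node);
        if (eventPos < samples)
            node.calculateBlock(samples - eventPos);
    }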

> >connect(guipoti,"value",freeverb,"roomsize");
>
> class This {
> ...
> void adjust_roomsize (Value);
> };
>
> some_other_code () {
>
> Gtk::Adjustment adjuster (some parameters);
> Gtkmmext::MotionFeedback knob (adjuster, knob_pixmaps);
>
> adjuster.value_changed.connect (slot (that, &This::adjust_roomsize));
> }
>
> whats the problem ?

None. But correct me if I am wrong with a guess here: I think you always
assume that

1. you write all components
2. you compile the thing
3. you run it

Then of course, connecting things like that (via a function pointer)
will work. But I am assuming

1. you write some components
2. you compile the thing
3. you run it

4. you write some more components
5. you compile only these
6. you use them (either in the already running app, or restarting the app)

This way, the app built by 1./2./3. doesn't know about the steps 4./5./6.
and so can't make connections by calling

   adjuster.value_changed.connect (slot (that, &This::adjust_roomsize));

whereas it could by calling

   connect(adjuster,"value_changed", that, "adjust_roomsize");

given that

a) you provide self-description (i.e. the app can list available slots)
b) they share a common base (Arts::Object or Arts::SynthModule in that case)

That is most of why you need a meta-object model - to deal with components
that were not known at compile time. I think you could build one on top of
sigc++, but you get none of it from plain sigc++.
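
A stripped-down illustration of the principle (these classes are made up
for the example, they have nothing to do with the actual MCOP ones): with
a common base that can accept a slot by name, connections can be made from
strings that only become known at run time, e.g. from a saved flow graph
or a GUI builder.

    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // common base: a component can accept a slot by name at run time;
    // this self-description is what allows connections to be made from
    // strings, for components the app never saw at compile time
    struct Object {
        virtual bool setSlot(const std::string &name, float value) = 0;
        virtual ~Object() {}
    };

    // a tiny GUI poti stand-in: emitting a value forwards it to every
    // connected (object, slot-name) pair
    struct Poti {
        std::vector<std::pair<Object *, std::string> > targets;
        void emitValue(float v) {
            for (size_t i = 0; i < targets.size(); i++)
                targets[i].first->setSlot(targets[i].second, v);
        }
    };

    struct Freeverb : Object {
        float roomsize;
        Freeverb() : roomsize(0.5f) {}
        bool setSlot(const std::string &name, float v) {
            if (name == "roomsize") { roomsize = v; return true; }
            return false;                    // unknown slot
        }
    };

    // connect(poti, verb, "roomsize") - only strings are needed, no
    // pointer-to-member has to be visible at compile time
    void connect(Poti &source, Object &target, const std::string &slot) {
        source.targets.push_back(std::make_pair(&target, slot));
    }

    int main() {
        Poti poti;
        Freeverb verb;
        connect(poti, verb, "roomsize");
        poti.emitValue(0.8f);
        std::cout << "roomsize = " << verb.roomsize << std::endl;
    }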

> >So if any Gtk programmer wants to join in ;) - I think using such a component
> ^^^
> Gtkmm, surely. The GTK+ programmers would be running from
> C++ as fast as possible :)
>
> >layer really SOLVES the problem of toolkit independence of audio processing
> >plugins. LADSPA is simply not up to the task of doing so, but with a
> >middleware like MCOP it becomes really easy. And you don't care that your GUI
> >is running in one process/thread while your signal processing is running in
> >another, because the transport is done automagically and transparently.
>
> These things don't seem connected to me. LADSPA has to do with
> connecting audio flow within a single execution context. MCOP has to
> do with moving data around in a transport-independent way. If you make
> the concession that low-latency audio apps don't want transport
> independence for audio (please, please make that concession :),

Yes, I do (that's what aRts/MCOP synchronous streaming is for).

> then
> LADSPA is fine for that purpose. OK, then we need some other mechanism
> for distributing non-audio data. I don't care much about this, not
> because its not important, but precisely because whatever the method,
> I am pretty sure it will be transport independent.

I think the most important thing that aRts/MCOP guarantees is that it is
reasonably complete and consistent. It doesn't, like LADSPA, sigc++ or
whatever else, cover just *one aspect* which you might eventually combine
with other stuff to get a component-based "virtual studio".

It rather covers *every aspect* which you need to get a "virtual studio".
This is why aRts/MCOP cares about getting parameter data passed, building
GUIs, describing flow graphs, providing network transparency and
self-description, maintaining toolkit independence, ... and so on.

Today somebody sent me a preview of a Fruity Loops clone on top of
aRts/MCOP; there is an experimental tracker (artstracker), a sequencer
(Brahms), some KDE media players (noatun/kaiman), a sound server (artsd),
and so on.

The idea of standardizing the glue is that all these apps use a consistent
base, which makes it very easy for them to be interoperable.

And I currently simply don't see a technical reason why the framework
shouldn't be good enough for use with HDR-like components.

   Cu... Stefan

-- 
  -* Stefan Westerfeld, stefan_AT_space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-         

