Subject: Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Fri Nov 17 2000 - 22:59:44 EET


>Well, there is nothing wrong with having a cool interface for "playing
>things", as well as having a cool interface for "recording things to
>disk". But they IMHO really shouldn't be mixed. It should look like
>
> +----------------+            +---------------+
> |   HDR System   |            |   Softsynth   |
> +---v--v--v--v---+            +---v--v--v--v--+
>     |  |  |  |                    |  |  |  |
> +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
> |               multichannel play interface                |
> +----------------------------------------------------------+
>
>This could for instance be a snapshot where you use software synthesis live
>on some channels of your multichannel play interface, while you play back
>something from the HDR System. Later on, the user may choose to record
>the softsynth stuff to the HDR system, so he can go on and connect the
>outputs of the softsynth stuff to some of the inputs of the HDR system.

No question. And this is precisely what I am now doing with ardour's
internals. At the moment, it looks like this:

   +----------------+
   |   HDR System   |
   +---v--v--v--v---+
       |  |  |  |
   +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
   |               multichannel audio interface               |
   +----------------------------------------------------------+

The next incarnation will be:

   +----------------+
   |   HDR System   |
   +---v--v--v--v---+
       |  |  |  |
   +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
   |                   host/patchbay/router                   |
   +---v--v--v--v---------------------------------------------+
       |  |  |  |
   +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
   |               multichannel audio interface               |
   +----------------------------------------------------------+

which will then allow:

   +----------------+            +---------------+
   |   HDR System   |            |   Softsynth   |
   +---v--v--v--v---+            +---v--v--v--v--+
       |  |  |  |                    |  |  |  |
   +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
   |                   host/patchbay/router                   |
   +----------------------------------------------------------+
       |  |  |  |                    |  |  |  |
   +---v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v--v---+
   |               multichannel play interface                |
   +----------------------------------------------------------+

>About meters, well, if your hardware can get meters, that is nice. If not,
>then you'll need to read data from the soundcard while you get the meters,
>and not record. That is a good argument for separating the metering code
>into its own components, rather than including it in a more complex HDR System.

Already the case. But the point is, to be useful, this has to be
within either the multichannel audio interface object or the
router. It can't be off to the side, because that requires plugins to
know about one another, and I consider that a bad idea.

Furthermore, suppose nothing is being played or recorded. There is no
reason to do any sample format conversion just to get input meter
values. Ardour's multichannel object reads the sample values directly
from the DMA buffer of the audio h/w, and as such is h/w dependent
because it has to know the sample format and sample layout in use. If
we try to move the input metering out into a plugin (say), we now have
to convert the data into a standardized form and pass it to the plugin.
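
To make that concrete, here is a sketch of the kind of scan I mean.
The function and the assumption of interleaved 16-bit samples are
mine for illustration, not Ardour's actual code:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical sketch: scan an interleaved 16-bit DMA buffer for
       per-channel peaks, without converting anything to a standardized
       sample format first. */
    void
    scan_peaks (const int16_t *dma, size_t frames, int nchannels, int *peaks)
    {
            for (int c = 0; c < nchannels; c++) {
                    peaks[c] = 0;
            }

            for (size_t f = 0; f < frames; f++) {
                    for (int c = 0; c < nchannels; c++) {
                            int s = abs ((int) dma[f * nchannels + c]);
                            if (s > peaks[c]) {
                                    peaks[c] = s;
                            }
                    }
            }
    }

Pushing this out into a plugin means converting every frame into a
standardized form first, which is exactly the cost I want to avoid.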

Again, it's an optimization, but hey, I'm an optimistic kind of guy :)

>As for MVC. Think of it this way: the model IS the HDR System component,
>or at least the model is one part of the HDR System component. The view
>and controller then access the HDR System with various methods to achieve
>whatever they want. That way, you can have different apps/components which
>act as views/controllers.

The reason this doesn't work is that the data flows in multiple
directions. The model is not *just* the HDR system; it's also the
router, the audio interface, a MIDI interface, and so on.

I have had immense pains dealing with the fact that many of my
programs have at least 3 sources of control events:

   * MIDI port(s)
   * X input devices
   * audio interface state changes

I don't know about Qt; it's perhaps better than Gtk--, but Gtk--
definitely isn't very MVC friendly, and handling this kind of thing
gets tricky. For example, there is no easy way to set a toggle
button's state without making it appear that the user actually clicked
it (i.e. the usual callback for "user clicked the toggle button" gets
invoked). This gets ugly really fast. It's partly the fault of
Gtk--/GTK, and it's partly that until now these kinds of programs were
rare. It has forced me away from using standard widgets in many cases,
and into wrapping them up in many others.
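
For what it's worth, the usual workaround is exactly that kind of
wrapping: block your own signal connection while setting the state
programmatically, so the user-level callback only fires for genuine
clicks. A sketch, using spellings from later gtkmm/libsigc++ releases
(the Gtk-- 1.2-era names differ, but the dance is the same):

    #include <gtkmm/togglebutton.h>

    class StateToggle : public Gtk::ToggleButton
    {
      public:
        StateToggle (const Glib::ustring &label)
                : Gtk::ToggleButton (label)
        {
                conn = signal_toggled ().connect
                        (sigc::mem_fun (*this, &StateToggle::user_toggled));
        }

        /* called from MIDI/audio-interface event handlers: update the
           widget without invoking the user callback */
        void set_state_quietly (bool on)
        {
                conn.block ();
                set_active (on);
                conn.unblock ();
        }

      private:
        sigc::connection conn;

        void user_toggled ()
        {
                /* respond to a real user click */
        }
    };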

>In fact, it is the GLUE between the components, which you need to standardize
>to make components interoperable, and aRts/MCOP provides IMHO good ways for
>doing so. I mean IDL files, the MCOP protocol, the code generator, component
>trading, self-description of components, etc.

But Stefan, this is all total overkill, and in some sense too much
overhead, for things in the audio processing chain.

MCOP looks nice for communicating control parameters and events around
the place, but it really seems to me to have nothing to do with
the audio stream in a low latency system.

>> I would be interested in Stefan's thoughts on the comparison of MCOP
>> and libsigc++.
>
>Libsigc++ doesn't standardize the other parts of the object interface, like
>methods, attributes, instantiation, inheritance.

Intentional, of course.

>and get a list of all available slots on that object. libsigc++ doesn't
>interact with the write-c++-code, compile, link, execute cycle, it does
>its work solely by employing templates.

This is not really true. Just as you change the definitions in an IDL,
you can change the declarations of the signals that a given object
has. Then you have to recompile, link, execute, etc.

>On the other hand, a CORBA idl compiler, the Qt meta-object-compiler and
>the MCOP idl compiler all work in the same way:

True - they all add an extra step to the compilation process, rather
than using the power of templates :) The point here is that we are
harnessing the C++ compiler to do a lot of that work for
us. We don't need external compiler systems because the signal system
is integrated into C++ itself.

> * use code-generation to add missing features, for MCOP that is
>   - self-description (i.e. what methods component foo has, what streams,
>     what attributes, what it inherits)

True.

>   - binding it to a signal flow system

You can use sigc++ for this too, but the underlying run-time behaviour
is a little different. In fact, in sigc++, it's pretty much like
LADSPA. E.g.

        Signal1<void,Audio *> input_ready;
        input_ready.connect (slot (plugin, &Plugin::run));

results in:

        plugin->run (data);

being called every time we do "input_ready (data)".
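
Spelled out as a complete (if contrived) program, using the later
libsigc++ 2.x spellings (sigc::signal, sigc::mem_fun) rather than the
Signal1/slot spellings above; the run-time behaviour is what matters:

    #include <sigc++/sigc++.h>

    struct Audio;   /* opaque stand-in for a real audio buffer type */

    class Plugin : public sigc::trackable
    {
      public:
        void run (Audio *data) {
                /* process one block of audio */
        }
    };

    int
    main ()
    {
        sigc::signal<void, Audio *> input_ready;
        Plugin plugin;

        input_ready.connect (sigc::mem_fun (plugin, &Plugin::run));

        Audio *data = 0;
        input_ready (data);   /* ends up as a direct call to plugin.run (data) */
        return 0;
    }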

>   - provide network-transparency/IPC

This has been/is being added to libsigc++ as we speak, but it's being
done without CORBA or extra compilers, just with run-time libraries.
         
>In short, libsigc++ and MCOP are currently almost perfectly orthogonal.

Well, that's why I asked. I don't actually see them as being orthogonal
at all. If you use a version of libsigc++ with thread+network
transparency, you can do the plugin example above, and not realize
that

     plugin->run (data)

is actually happening 10,000km away, or in another thread.

>On the other hand, I am right now working on building a signals & slots
>system into MCOP, too (not libsigc++).

Why *not* libsigc++ ?

>While connecting streams of data
>works (i.e. streams of audio data, midi data and so on), what's really
>missing right now is connecting property based stuff (also network
>transparent again), like
>
>connect(guipoti,"value",freeverb,"roomsize");

    class This {
        ...
        void adjust_roomsize (Value);
    };

    some_other_code () {

         This *that = ...;   /* the object whose roomsize we control */

         Gtk::Adjustment adjuster (/* some parameters */);
         Gtkmmext::MotionFeedback knob (adjuster, knob_pixmaps);

         adjuster.value_changed.connect (slot (that, &This::adjust_roomsize));
    }

What's the problem?

>So if any Gtk programmer wants to join in ;) - I think using such a component
           ^^^
           Gtkmm, surely. The GTK+ programmers would be running from
           C++ as fast as possible :)

>layer really SOLVES the problem with toolkit independence of audio processing
>plugins. LADSPA is simply not up to the task of doing so, but with a
>middleware like MCOP it becomes really easy. And you don't care that your GUI
>is running in one process/thread while your signal processing is running in
>another, because the transport is done automagically and transparently.

These things don't seem connected to me. LADSPA has to do with
connecting audio flow within a single execution context. MCOP has to
do with moving data around in a transport-independent way. If you make
the concession that low-latency audio apps don't want transport
independence for audio (please, please make that concession :), then
LADSPA is fine for that purpose. OK, then we need some other mechanism
for distributing non-audio data. I don't care much about this, not
because it's not important, but precisely because whatever the method,
I am pretty sure it will be transport independent. MCOP, libsigc++,
MAIA (GREAT NAME, David O.!) - it's really not that important to
me. What I really care about is tightly controlling the audio flow.
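
To make "single execution context" concrete, this is roughly all a
host does to push one block of audio through a LADSPA plugin. The
plugin path and port numbers here are placeholders, error handling is
omitted, and a real host must connect every port the descriptor
declares (control ports included):

    #include <dlfcn.h>
    #include <ladspa.h>

    #define FRAMES 1024

    int
    main ()
    {
        /* placeholder path; a real host scans LADSPA_PATH */
        void *lib = dlopen ("/usr/lib/ladspa/plugin.so", RTLD_NOW);
        LADSPA_Descriptor_Function df = (LADSPA_Descriptor_Function)
                dlsym (lib, "ladspa_descriptor");
        const LADSPA_Descriptor *d = df (0);   /* first plugin in the library */

        LADSPA_Data in[FRAMES], out[FRAMES];
        LADSPA_Handle h = d->instantiate (d, 44100);

        d->connect_port (h, 0, in);    /* assume port 0 = audio input  */
        d->connect_port (h, 1, out);   /* assume port 1 = audio output */
        if (d->activate) d->activate (h);

        /* the whole "transport": one function call, same thread,
           no marshalling, no context switch */
        d->run (h, FRAMES);

        d->cleanup (h);
        dlclose (lib);
        return 0;
    }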

Note that I *do* understand the broader ramifications of MCOP. I think
that people wanting to do things in the performance world could use it
with a network of systems for some very interesting effects. Audio
rendering could use it very effectively too. I just happen to be
*very* focused on applications that are replacements for and extensions
of things done now by dedicated hardware. In such systems even the
relatively low overhead of the function generated by the MCOP compiler
for same-thread execution strikes me (without knowing too much about
it) as more than I want to deal with.

I could be wrong. And I mean that!

--p

