Subject: Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Stefan Westerfeld (stefan_AT_space.twc.de)
Date: Fri Nov 17 2000 - 21:33:29 EET


   Hi!

On Fri, Nov 17, 2000 at 12:47:52PM -0500, Paul Barton-Davis wrote:
> >What is missing in LADSPA is besides network transparency and IPC the ability
> >to talk to components in a more complex way than streaming data around. The
> >HDR issue illustrates: you want to abstract the HDR engine from the
> >application that is using it. So an HDR engine should be a component like an
> >equalizer or a freeverb effect.
>
> The problem is that this kind of "abstraction" doesn't get close to
> the real complexity of interlocking:
>
> * an audio interface with hardware monitoring
> * an audio thread
> * a disk thread
> * a rich and powerful GUI that provides lots of feedback
> on state, and multiple ways to do things
> * Model-View-Controller design issues
>
> So no, I don't want to abstract the HDR engine from the application
> that is using it. Not for one minute. You might be able to build a
> simplistic kind of HDR system this way, but not something that can be
> used as a professional recording system because there will be too much
> decoupling of the various parts of the system. Or let me put it
> another way. While building ardour, i had to continually refine the
> set of interfaces i wanted my audio h/w interface class to present. To
> build Ardour "into" aRts, for example, aRts would have to be extended
> to provide things like:
>
> * sample clock sync status
> * toggling h/w monitoring
> * changing the digital sample clock source
> * getting meter values even though nothing is being played
> or recorded

Well, there is nothing wrong with having a cool interface for "playing
things", as well as having a cool interface for "recording things to
disk". But IMHO they really shouldn't be mixed. It should look like this:

   +----------------+      +---------------+
   |   HDR System   |      |   Softsynth   |
   +---v--v--v--v---+      +---v--v--v--v--+
       |  |  |  |              |  |  |  |
   +---v--v--v--v--------------v--v--v--v-----v--v--v--v------+
   |               multichannel play interface                |
   +----------------------------------------------------------+

This could for instance be a snapshot where you use software synthesis live
on some channels of your multichannel play interface, while you play back
something from the HDR System. Later on, the user may choose to record
the softsynth material into the HDR system; to do so, he simply connects the
outputs of the softsynth to some of the inputs of the HDR system.
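
In (hand-waving) MCOP-like pseudo code - the component and port names here
are completely made up, so take this as a sketch of the wiring, not as real
aRts API - the snapshot could look like:

   Arts::HDRSystem     hdr;     // plays back already recorded tracks
   Arts::Softsynth     synth;   // live software synthesis
   Arts::PlayInterface card;    // the multichannel play interface

   connect(hdr,   "out_1", card, "play_1");   // HDR tracks on channels 1/2
   connect(hdr,   "out_2", card, "play_2");
   connect(synth, "out_1", card, "play_5");   // softsynth live on channels 5/6
   connect(synth, "out_2", card, "play_6");

   // later, when the user decides to record the softsynth material:
   connect(synth, "out_1", hdr, "rec_1");
   connect(synth, "out_2", hdr, "rec_2");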

About meters: if your hardware can deliver meter values, that is nice. If
not, you'll need to read data from the soundcard to compute the meters,
without recording it. That is a good argument for separating the metering
code into its own components rather than burying it in a more complex HDR
System.
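
To sketch what I mean (this is roughly along the lines of how aRts modules
are written - more on the _skel class further down - but the PeakMeter names
are invented, so don't take it as working code): such a meter would be a
tiny pass-through component of its own,

   class PeakMeter_impl : virtual public PeakMeter_skel,
                          virtual public Arts::StdSynthModule
   {
       float peak;
   public:
       PeakMeter_impl() : peak(0) {}

       float level() { return peak; }           // polled by some GUI view

       void calculateBlock(unsigned long samples)
       {
           for (unsigned long i = 0; i < samples; i++) {
               float v = invalue[i] < 0 ? -invalue[i] : invalue[i];
               if (v > peak) peak = v;
               outvalue[i] = invalue[i];         // pass the signal through
           }
       }
   };

which you can hang into any signal flow - between soundcard input and HDR
System, or behind an HDR System output - without the HDR System knowing
anything about metering.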

While the HDR System would run partly in the audio thread (as in fact all
aRts components do), it could start up its own disk thread and talk to that
directly. In fact, that is what I would expect from an HDR System: to take
away the burden of communicating intelligently with a disk thread, reading
the data nicely from disk and writing it back again.
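
The usual way to do that decoupling - nothing aRts specific, just a plain
C++ sketch with made-up names - is a single-reader/single-writer ring buffer
between the two threads:

   #include <algorithm>
   #include <atomic>
   #include <cstddef>
   #include <vector>

   // disk thread writes, audio thread reads; neither side ever blocks
   class RingBuffer {
   public:
       explicit RingBuffer(std::size_t size) : buf(size), rpos(0), wpos(0) {}

       // disk thread: refill the buffer with samples read from disk
       std::size_t write(const float *src, std::size_t n) {
           std::size_t w = wpos.load(std::memory_order_relaxed);
           std::size_t r = rpos.load(std::memory_order_acquire);
           std::size_t used = (w >= r) ? (w - r) : (buf.size() - r + w);
           std::size_t todo = std::min(n, buf.size() - 1 - used);
           for (std::size_t i = 0; i < todo; i++)
               buf[(w + i) % buf.size()] = src[i];
           wpos.store((w + todo) % buf.size(), std::memory_order_release);
           return todo;
       }

       // audio thread: called from the signal processing code each block
       std::size_t read(float *dst, std::size_t n) {
           std::size_t r = rpos.load(std::memory_order_relaxed);
           std::size_t w = wpos.load(std::memory_order_acquire);
           std::size_t avail = (w >= r) ? (w - r) : (buf.size() - r + w);
           std::size_t todo = std::min(n, avail);
           for (std::size_t i = 0; i < todo; i++)
               dst[i] = buf[(r + i) % buf.size()];
           rpos.store((r + todo) % buf.size(), std::memory_order_release);
           return todo;
       }

   private:
       std::vector<float> buf;
       std::atomic<std::size_t> rpos, wpos;
   };

If the audio thread finds less data in the buffer than it needs, that is an
xrun of the disk thread, and the HDR System can report it; the application
on top never has to care how the two threads talk to each other.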

As for MVC, think of it this way: the model IS the HDR System component,
or at least the model is one part of the HDR System component. The view
and controller then access the HDR System through various methods to achieve
whatever they want. That way, you can have different apps/components which
act as views/controllers.
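
A few lines to illustrate (again, HDRSystem and its methods are invented
names, this is not existing aRts code):

   // both of these may run in a different process than the HDR System itself
   Arts::HDRSystem hdr = Arts::Reference("global:my_hdr_system");

   hdr.armTrack(3);              // a controller: the user pressed "record"
   hdr.startRecording();

   long frame = hdr.position();  // a view: the time display polling the model

The GUI toolkit doesn't matter here; all the view/controller needs is the
component interface.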

Note that when I draw an HDR System box above, I do not put the GUI
in that box, only the signal processing part. The GUI is separate and
only USES the box via some interfaces.

I don't see why this should not work. I do see, however, that for this
to work nicely, you'll need to split up applications that were designed
monolithically into a much more component-based design.

In fact, it is the GLUE between the components which you need to standardize
to make components interoperable, and aRts/MCOP IMHO provides good ways of
doing so. I mean IDL files, the MCOP protocol, the code generator, component
trading, self-description of components, etc.

> I would be interested in Stefan's thoughts on the comparison of MCOP
> and libsigc++.

Well, libsigc++ is a signals & slots system, designed along the lines of
Qt signals and slots. It provides you with sophisticated versions of the
idea

   connect(playbutton,"clicked",mp3player,"play");

Note that a SIGNAL here is not an audio signal (i.e. continuous data), but
rather a kind of event that may carry some parameters, like "this button
was clicked now" or "the active view has changed to view 1".

Libsigc++ doesn't standardize the other parts of the object interface, like
methods, attributes, instantiation or inheritance.

libsigc++ also doesn't provide self-description, i.e. you can't do something
like

   listAllSlots(playbutton);

and get a list of all available slots on that object. libsigc++ doesn't
hook into the write-C++-code, compile, link, execute cycle; it does
its work solely by employing templates.
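
For comparison, with libsigc++ the play-button example from above reads
roughly like this (the exact spelling depends on the libsigc++ version):

   #include <sigc++/sigc++.h>

   struct MP3Player {
       void play() { /* start playing */ }
   };

   int main()
   {
       MP3Player player;
       sigc::signal<void()> clicked;              // "the button was clicked"

       // everything is resolved through templates at compile time -
       // there is nothing left over that you could query at runtime
       clicked.connect(sigc::mem_fun(player, &MP3Player::play));
       clicked.emit();
       return 0;
   }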

On the other hand, a CORBA IDL compiler, the Qt meta-object compiler and
the MCOP IDL compiler all work in the same way (a small .idl sketch follows
below the list):

 * read a file which describes the interface, attributes, streams,
   signals&slots, inheritance and other properties of an object
   (in CORBA/MCOP that is a .idl file, in Qt it's integrated in C++ headers
    through extra keywords)

 * use code generation to add the missing features; for MCOP that is

     - self-description (i.e. what methods component foo has, what streams,
       what attributes, what it inherits)

     - binding it to a signal flow system, i.e.

          connect(mp3player, freeverb);
          connect(freeverb, soundcard);

       would connect "audio streams" between the components

     - providing network transparency/IPC

          mp3player.startSong(); /* doesn't care whether mp3player is
                                    a local or a remote object */

     - providing object creation by name (by string), i.e. to create one:

          Arts::PlayObject p = Arts::SubClass("Arts::MP3PlayObject");

       which is equivalent to

          p = new Arts::MP3PlayObject();

       except that with "new" you need to know the type at compile time,
       whereas above you only need to know it at runtime

     - providing method invocations without knowing the interface beforehand

          long res;
          DynamicRequest(calculator).method("add").param(2).param(4).invoke(res);

       useful for scripting, RAD-like apps (like artsbuilder), ...

     - for datatypes (like struct TimeStamp { long a; long b; }) generating
       the marshalling operations which go with the MCOP protocol

 * implement components in relatively standard C++, while you get the extra
   features merged in via the generated code (like in aRts, inheriting the
   _skel class)
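
As the promised sketch of the .idl part: a component like the meter from
above might be declared roughly like this (syntax from memory, names
invented - a sketch, not a verified interface):

   // peakmeter.idl - fed to the MCOP IDL compiler
   interface PeakMeter : Arts::SynthModule {
       in  audio stream invalue;     // audio flowing in
       out audio stream outvalue;    // passed through unchanged
       float level();                // current peak, polled by a GUI
   };

and the generated code is what lets a client do things like

   Arts::SynthModule m = Arts::SubClass("PeakMeter");   // creation by name
   connect(mp3player, m);
   connect(m, soundcard);

without ever having seen the PeakMeter headers at compile time.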

In short, libsigc++ and MCOP are currently almost perfectly orthogonal.
You might really want to use both ;) - sigc++ for signals & slots, MCOP
for scheduling signal flow (i.e. calling calculateBlock on the plugins,
following a signal flow graph), managing inter-app communication and all
the other stuff I described above.

On the other hand, I am right now working on building a signals & slots
system into MCOP, too (not libsigc++). While connecting streams of data
already works (i.e. streams of audio data, MIDI data and so on), what's
really missing right now is connecting property-based stuff (again network
transparent), like

connect(guipoti,"value",freeverb,"roomsize");

When that is done, you might say that a signals & slots system is a small
part of MCOP. MCOP will then be the right choice not only for components
on the audio processing and server side, but also for GUI components. In
fact, I think that using the MCOP component layer to provide a GUI abstraction
for both KDE/Qt and Gtk components would be a really neat idea, and at least
for the KDE part, I already have some code for some components.

So if any Gtk programmer wants to join in ;) - I think using such a component
layer really SOLVES the problem of toolkit independence for audio processing
plugins. LADSPA is simply not up to that task, but with a middleware like
MCOP it becomes really easy. And you don't care that your GUI is running in
one process/thread while your signal processing is running in another,
because the transport is done automagically and transparently.

   Cu... Stefan

-- 
  -* Stefan Westerfeld, stefan_AT_space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-         

