Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Thu Sep 23 1999 - 13:33:57 EDT


>> >Two problems: Synchronization of the GUI and processing code, and
>> >network transparency.
>>
>> Network transparency is an illusory goal (for audio) in my
>> opinion. However, even if you want to pursue a more limited form of
>> it, note that by using X and simply forcing the plugin's cycles to run
>> on the same host as the engine, you do have a certain polarity of
>> network transparency.
>
>What I have in mind here is using one machine to run the client application
>and another machine, or possibly a cluster to do the processing. Distributing
>GUI code all over the place gets a bit too messy for my taste... And it

That's not what I was suggesting. We need to get some clear terminology
here, because multithreading and X create an additional layer of
(potential) complexity not present in the VST/TDM/EASI worlds. I
humbly suggest:

* master

     - an application that runs the [engine] and [plugins]

* engine
     
     - 1 or more threads running as part of the [master] that handles
       the actual numerical processing done to input and output
       signals, along with scheduling etc.

* ui-thread

     - 1 thread (because X is still not really multi-threaded) that
       (in most cases) interacts with an X server, and is responsible
       for (1) altering the UI to reflect information received from
       the engine (2) sending requests to the engine that reflect
       information from the user interface.

* plugin
  
     - a program in some (unspecified) language that contains
       (1) a part that will be run by the engine and (2) a part that
       will be run by the ui-thread

* display
  
     - the screen where the user interface can be seen. Under X, this
       may not be attached to the same machine as the one running the
       [master]

Using this terminology, what I was suggesting was that the ui-thread
{c,w,sh}ould run on the same machine as the engine, but that because
of X, this doesn't prevent the display from being on some other
machine.
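
To make the split concrete, here is a minimal sketch (POSIX threads;
engine_main() and ui_main() are names I'm making up purely for
illustration) of how a master might arrange this:

        #include <pthread.h>

        /* DSP cycle: runs the plugins' engine parts */
        void *engine_main (void *arg) { /* ... */ return 0; }

        /* single X client: runs the plugins' UI parts */
        void *ui_main (void *arg) { /* ... */ return 0; }

        int
        main (int argc, char **argv)
        {
                pthread_t engine, ui;

                /* both threads live in the master's address space on
                   one host; the display the ui-thread draws on may be
                   on a different machine, courtesy of X. */
                pthread_create (&engine, 0, engine_main, 0);
                pthread_create (&ui, 0, ui_main, 0);

                pthread_join (ui, 0);
                pthread_join (engine, 0);
                return 0;
        }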

BTW, I don't think we're ready to consider clusters yet. The latency,
by which I mean the delay in sending a message onto a previously empty
wire, is too great for audio synchronization.

>requires the GUI code to actually work on the same system as the DSP code -
>while you most probably don't want to load Qt, GTK+ or whatever on every node
>of a Beowulf. And what if you decide to build a cluster of Alpha or PPC boxes?

GTK is ported to these platforms. But that's beside the point. The UI
code *has* to run on the same host if you want low-latency interaction
between the interface and the engine. If you don't care about that,
then you have to devise a network IPC protocol to relay changes in the
UI to the master. This seems silly.

>> As for the sync problem - I don't believe there is a sync
>> problem. This is precisely what Quasimodo does now: the UI runs
>> in a different thread than the DSP emulation, and it fiddles randomly
>> with the data used by the DSP; there is *no* synchronization between
>> them at all. I have had no problems with this, and I cannot find a
>> theoretical reason why there should be, because of the atomicity of
>> the instructions to store and load floats and integers (which is what
>> a parameter change fundamentally reduces to). So I think this is a
>> non-issue too.
>
>Not if you plan to support more complex data types than integers and floats.
>Which is the case with the new plug-in API...

Sorry, but I don't think so. When you add two numbers, you are adding
floats or integers. If you are proposing something of the form:

       res->a = src->a + src->b;

I again suggest that you read the opcode source to Csound, even though
this can be a painful experience. This is not how you code realtime
DSP functions. This is also why Csound makes a (permeable) distinction
between variables that can change at different rates. In Csound, if a
variable is only ever set once per instantiation of the, errr, plugin
:), then you do this:

              struct my_closure : basic_closure {
                     float *a;     /* point at UI-visible storage */
                     float *b;

                     float _a;     /* private copies, cached at init time */
                     float _b;
              };

and in the initialization stage of the plugin, you do this:

             closure->_a = *closure->a;    /* snapshot the "set once" values */
             closure->_b = *closure->b;    /* exactly once, at init time     */
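
(Purely to illustrate the point - my_process() and nframes are invented
names, not part of any proposed API - the per-cycle code then uses the
cached copies and never touches the UI-visible pointers inside the
inner loop:)

        void
        my_process (struct my_closure *c, float *out, int nframes)
        {
                float a = c->_a;   /* cached at init time; cannot change */
                float b = c->_b;   /* under the plugin's feet mid-cycle  */
                int n;

                for (n = 0; n < nframes; n++) {
                        out[n] = out[n] * a + b;
                }
        }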

If the variable can change more rapidly than that, then certainly
there is a potential problem with:

        res->a = src->a + src->b;

if the UI can twiddle with src->a and src->b directly. But the
twiddling is still atomic, and besides, there is no computer-based UI
that will allow you to alter two things simultaneously. So in reality,
a and b always change independently.

The only case I can imagine where this is not true is when a single
on-screen control element actually affects two internal variables, but
that's just bad design of the plugin, IMHO.

>And, what about timing? How do you handle sample accuracy without going to
>single sample buffers? Events handle that in a clean and low cost way.

No they don't. You can't do sample accuracy without reducing the size
of the engine control cycle (the number of samples it generates
between checking for a new event in some way) to 1.
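
To spell out what I mean (a sketch only - struct event, ev_pop(),
apply() and run_dsp() are invented names): if the engine only looks
for events once per control cycle of N samples, a change can land up
to N-1 samples late. To be sample accurate you either set N to 1 or
cut the cycle at every event's offset, which in the worst case is the
same thing:

        struct event { int offset; /* frame offset within this cycle */ };

        extern struct event *ev_pop (int from, int upto); /* earliest pending event, or 0 */
        extern void apply (struct event *ev);             /* perform the parameter change  */
        extern void run_dsp (int from, int nframes);      /* generate that many samples    */

        void
        run_cycle (int nframes)
        {
                int done = 0;

                while (done < nframes) {
                        struct event *ev = ev_pop (done, nframes);
                        int until = ev ? ev->offset : nframes;

                        run_dsp (done, until - done);  /* samples [done, until) */
                        if (ev)
                                apply (ev);            /* takes effect exactly at `until' */
                        done = until;
                }
        }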

>kind of programming on the Amiga. Fun for a while, but then you start to
>realize that a faster CPU is always more flexible. At least with an RTOS...)

and with Ingo on your side, you realize that a faster non-RTOS is always
more flexible, too :)

>> yes, this is certainly a nice feature, but I am afraid that its too
>> slow. if changing the values of a parameter involves any more than
>> just a fairly low-cost function call (and particularly if it involves
>> a context switch between the UI thread and the engine thread), I have
>> some real concerns (based on experiences with Quasimodo) that you
>> can't do it fast enough.
>
>It doesn't even involve a function call per event. Events are written to Event
>Ports using inline code and dynamic memory allocation is done from heaps with
>limited life time; also using inline code. The only context switches/sync
>points are the global ones for client/engine communication.

This makes no sense to me. Let's say I want to change the delay time on
a delay line. I tweak a virtual on-screen knob to do this. What does
the ui-thread do to cause the delay time to change? How does the
engine know that it has happened, if indeed it has to? (In Quasimodo,
it has no idea - the value at a certain memory location just gets
altered.) This seems fundamental to me, and I simply don't understand
from what you've written so far how the event system can do this - it
seems based (both in terms of its name and the API) on a polling
system. That is not going to work, or at least, it can't do the things
I want Quasimodo to do. There is no polling in Quasimodo: the ui-thread
has direct access to the memory containing the variables used by the
DSP thread. There is no communication between them, and the DSP code
never polls, looks at, or checks any memory location to see whether
anything has changed.
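
Concretely - this is only a sketch of the idea, not Quasimodo source,
and `delay_time' plus the function names are invented - the knob
callback in the ui-thread just stores into the float the DSP code
already reads, and the atomicity of that store is the whole
synchronization story:

        struct delay_closure {
                float delay_time;  /* seconds; read by the engine every cycle */
                /* ... delay line buffer, read/write indices, etc ... */
        };

        /* ui-thread: called from the on-screen knob widget.  A single
           aligned float store is atomic, so no lock, no message, no
           polling is involved. */
        void
        knob_moved (struct delay_closure *c, float seconds)
        {
                c->delay_time = seconds;
        }

        /* engine: each cycle simply uses whatever value is there now */
        void
        delay_run (struct delay_closure *c, float *buf, int nframes)
        {
                float t = c->delay_time;  /* picks up any new value next cycle */
                /* ... produce nframes samples of delayed signal using t ... */
        }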

>Shouldn't be a problem. Also note that the events can be handled with
>sample accuracy no matter what buffer size you use. If the MIDI
>interface plug-in can extract exact timing info from the MIDI data,
>that can be used to timestamp the events, so that plug-ins may use
>the information if desired.

I don't know what you're thinking of here. MIDI interfaces don't do
timestamping. The MTC messages are of far too coarse a resolution for
timing purposes, though they could be used for certain kinds of sync.

