Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Thu Sep 23 1999 - 16:31:21 EDT


On Thu, 23 Sep 1999, Benno Senoner wrote:
[...]
> > >2) Native GUI built into the plug-in. (100% Platform dependent...)
> >
> > How does this work? What is handling GUI event callbacks? Is the
> > plugin a separate thread or task? Or is it all running as part of the
> > engine?
>
> I'd prefer to separate the two, or you will end up with low-latency
> problems.
> A GUI doesn't have to be very real-time; therefore, run the GUI thread
> at normal priority.

Not only that: with such a design (apart from it being plain wrong according
to many schools of design), you *can't* port your plug-ins to dedicated DSP
systems, nor to some kinds of RT-kernel-based engines, as you can't execute, or
even load and link, GUI code in the same environment the DSP code runs in.
(Personally, I don't think GUI and processing code should even be allowed to
share an address space...)
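
To make that concrete, here is a minimal sketch of what a GUI-free plug-in
interface could look like. All names are invented for illustration; this is
not any actual proposed API:

/* Hypothetical plug-in descriptor: DSP side only, no GUI code anywhere. */
#include <stdint.h>

typedef struct {
    const char *name;       /* human-readable port name */
    float       min, max;   /* valid control range */
} pi_port_info;

typedef struct {
    const char         *label;
    uint32_t            port_count;
    const pi_port_info *ports;

    /* Lifecycle and processing; nothing here touches a display. */
    void *(*instantiate)(uint32_t sample_rate);
    void  (*connect_port)(void *handle, uint32_t port, float *data);
    void  (*run)(void *handle, uint32_t frames);
    void  (*cleanup)(void *handle);
} pi_descriptor;

The host is then free to run such a descriptor on a DSP card or inside an RT
kernel context, while the GUI, if any, lives in a separate process and talks
to the engine purely through parameter events.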

> > Network transparency is an illusory goal (for audio) in my
> > opinion. However, even if you want to pursue a more limited form of
> > it, note that by using X and simply forcing the plugin's cycles to run
> > on the same host as the engine, you do have a certain degree of
> > network transparency.
>
> Yeah, it would be nice to be able to walk around the studio,
> and have a way to fire up your mixer GUI on your preferred workstation,
> while the audio engine still runs on the BIG-BOSS machine
> (4-way K7 @ 800 MHz :-) ).

And I certainly DON'T want to listen to the cooling compressors and fans of a
machine like that, so a remote GUI is not only cool, but *required* for any
serious high-end application. Preferably, the X terminal should have a flash
disk (if any disk at all), and no fans. The Big Cool Box should be in the next
room.

> But for low-latency distributed processing you are right;
> the network latency is too high, which makes Quasimodo on a Beowulf
> cluster not very flexible.

The latency is quite low with decent NICs and real-time-capable drivers and
protocols. It's probably not good enough to get below the 3 ms limit yet, but
in a big hard disk recording system, up to 50 ms of latency for track insert
effects isn't all that bad. The real low-latency stuff and the final master mix
should be done on the machine with the audio cards, of course.

The point: 50 ms of latency is a lot better than having to resort to off-line
processing. True, the bus on an SMP system is faster than Ethernet with RT
drivers, but can anyone tell me how to build a 16-way Celeron 500...? ;-)
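
The arithmetic behind that claim, with illustrative numbers rather than
measurements (the sample rate and period size are assumptions):

/* How much room does a 50 ms insert-effect budget actually leave? */
#include <stdio.h>

int main(void)
{
    const double rate   = 44100.0;  /* sample rate in Hz */
    const double budget = 0.050;    /* 50 ms latency budget */
    const int    period = 256;      /* frames per processing period */

    double frames  = rate * budget;    /* 2205 frames */
    double periods = frames / period;  /* ~8.6 periods */

    printf("%.0f frames = %.1f periods of %d\n", frames, periods, period);
    return 0;
}

That is, more than eight full processing periods' worth of slack, which is
plenty for a network round trip on a dedicated segment with RT drivers.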

[...]
> IMHO the GUI thread should queue up incoming events (parameter changes),
> and if the event stream gets too dense, just thin out the stream or evaluate
> only the most recent events.
> It makes no sense to me to send 1000 parameter changes/sec to a GUI fader
> while the graphics card refresh rate is only 70 Hz or so.
> I know that you can't ignore all events, like volume fader changes.
> But with a smart system, even when there are dense event streams,
> you can keep the GUI snappy.

A classic and simple solution is to run the GUI's event parsing until the
buffers are empty, and then update the display. That way, you can't overload
the GUI, but you will still reach the maximum refresh rate the video system can
cope with. (Somewhat like 3D games: the frame rate is determined by the CPU
and video processor power, while the logic engine runs on a fixed time base in
the background, or, in the case of a single-threaded engine, calculates
animations for the amount of time elapsed since it was last invoked.)
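
A rough sketch of that loop, with hypothetical queue and widget calls, which
also coalesces dense bursts down to one update per parameter, as Benno
suggests:

/* Drain the event queue completely, then redraw once per changed widget. */
#define NUM_PARAMS 128

typedef struct { int param; float value; } gui_event;

extern int  queue_pop(gui_event *ev);   /* nonzero while events remain */
extern void redraw_widget(int param, float value);

void gui_update(void)
{
    float latest[NUM_PARAMS];
    int   dirty[NUM_PARAMS] = { 0 };
    gui_event ev;
    int i;

    /* Keep only the newest value per parameter, so a dense burst
     * collapses into a single update. */
    while (queue_pop(&ev)) {
        latest[ev.param] = ev.value;
        dirty[ev.param]  = 1;
    }

    /* One redraw per changed widget, at whatever rate the video
     * system can sustain. */
    for (i = 0; i < NUM_PARAMS; i++)
        if (dirty[i])
            redraw_widget(i, latest[i]);
}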

[...]
> > I am currently investigating the XRecord extension for automation of
> > Quasimodo. I don't see any need to reinvent the wheel here -
> > automation of X interfaces is something that has been worked on for 10
> > years or so. The X model is fairly nice because the server will buffer
> > your events for you until you are ready to handle them, to a
> > point. So, you can go through a huge burst of UI changes, and then
> > pick up the event stream for automation recording once things calm
> > down.
>
> Interesting, but IMHO the GUI elements should be passive; that means
> the audio engine's automation should drive the elements by sending events.

Agreed. For timing precision, everything should be controlled from within the
engine, in order to avoid the nasty out-of-sync problems some DSP+MCU synths
have. Ever tried programming fast, percussive sounds on that kind of synth?
Any analog feel should be *simulated*, not a random side effect of the
system design...
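
As a sketch of what "controlled from within the engine" could mean (all types
and names here are invented): the engine stamps every control change with a
sample-frame time, the DSP applies it deterministically, and the GUI consumes
the same stream at lower priority, purely for display:

#include <stdint.h>

typedef struct {
    uint64_t when;    /* timestamp in sample frames */
    uint32_t param;   /* which control */
    float    value;   /* new value */
} ctl_event;

/* DSP side: apply every event whose timestamp falls inside the current
 * block. (A real engine would split the block at each event for full
 * sample accuracy; this applies them at block granularity.) */
void apply_events(const ctl_event *ev, int n,
                  uint64_t block_start, uint32_t block_frames,
                  float *params)
{
    int i;
    for (i = 0; i < n; i++) {
        uint64_t t = ev[i].when;
        if (t >= block_start && t < block_start + block_frames)
            params[ev[i].param] = ev[i].value;
    }
}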

//David

 ·A·U·D·I·A·L·I·T·Y·  P r o f e s s i o n a l  L i n u x  A u d i o
- - ------------------------------------------------------------- - -
    David Olofson: ·Audio Hacker ·Linux Advocate ·Singer/Composer
    ·Rock Solid ·Low Latency ·Plug-Ins ·Open Source
    www.angelfire.com/or/audiality | audiality_AT_swipnet.se

