Re: [linux-audio-dev] Re: Plug-in API progress?


Subject: Re: [linux-audio-dev] Re: Plug-in API progress?
From: David Olofson (audiality_AT_swipnet.se)
Date: Sat Sep 25 1999 - 09:25:24 EDT


On Sat, 25 Sep 1999, Paul Barton-Davis wrote:
> >There's a problem with applications where threads will block waiting for data
> >to get through all the time... That is, applications that do not map well to
> >parallel processing. But do audio engines really belong in that class?
>
> it depends. anything which may have to mixdown a series of output
> buffers and/or constantly do mutex between plugins to make sure
> they are not touching the output buffer at the same time *does* fit
> into this category.

The output buffer and the plug-ins should be on the same CPU in that case.
That's why it's so important to structure the processing net correctly. And,
unlike an OS scheduling ordinary tasks, an audio engine running plug-ins has
quite a lot of control here.
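
To make that concrete, here's a rough sketch of what I mean (all names made up
on the spot, not part of any proposed API): each worker/CPU owns the private
mix buffers of the chains assigned to it, so no plug-in ever has to take a
mutex around an output buffer, and the final mixdown is done by a single
thread after all workers have finished the block.

/* Sketch only: one worker per CPU, each owning the buffers it mixes into,
 * so plug-ins never contend for a buffer across CPUs. */
#include <stddef.h>

#define BLOCK_FRAMES 256

typedef struct plugin {
    void (*run)(struct plugin *self, const float *in, float *out,
                size_t frames);
    struct plugin *next;            /* next plug-in in this serial chain */
} plugin_t;

typedef struct chain {
    plugin_t *head;                 /* plug-ins run strictly in order */
    float out[BLOCK_FRAMES];        /* private mix buffer, owned by one worker */
} chain_t;

typedef struct worker {
    chain_t **chains;               /* all chains assigned to this CPU */
    size_t n_chains;
} worker_t;

/* Run every chain owned by one worker for one block.  The worker is the
 * only writer of chain->out, so no locking is needed here. */
static void worker_run_block(worker_t *w, const float *in, size_t frames)
{
    for (size_t c = 0; c < w->n_chains; c++) {
        chain_t *ch = w->chains[c];
        plugin_t *p = ch->head;
        if (!p)
            continue;
        p->run(p, in, ch->out, frames);          /* first stage reads the input */
        for (p = p->next; p; p = p->next)
            p->run(p, ch->out, ch->out, frames); /* later stages work in place */
    }
}

/* The mixdown of the per-worker buffers happens in one place, after all
 * workers have finished the block (e.g. behind a barrier), so the shared
 * master buffer is only ever touched by a single thread. */
static void mix_down(float *master, chain_t **chains, size_t n, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        float acc = 0.0f;
        for (size_t c = 0; c < n; c++)
            acc += chains[c]->out[i];
        master[i] = acc;
    }
}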

> >> Even on the KSR, which
> >> had a *much* faster inter-processor bus than ethernet, people doing
> >> heavy numeric processing that did not have this characteristic
> >> (i.e. there was a lot of read/write activity on the mythical "shared
> >> memory" that actually translated into invalidations of the local
> >> processor caches) found that their performance sucked. they had to
> >> switch to a NUMA model to get things to really fly, which was hardly
> >> the point of the KSR.
> >
> >Of course. Who would expect otherwise?
>
> KSR :)

Yes, obviously! :-)

> Not to mention dozens of researchers, and quite a few venture
> capitalists who got burned in the various ventures that tried to
> implement this kind of system.

Hmm... I thought it was pretty obvious that you'll run into problems if your
applications depend on lower latencies than the system can actually deliver.

As a parallel: when you write applications for NT, you usually try to keep the
number of threads down, as NT's inefficient context switching is a known
problem. If you can't live with that, you do what quite a few have already
done: switch to a real OS. :-)

> >Plug-ins are not executing in parallel...
>
> so much for closely-knit clusters, eh ?

Well, that wasn't really the point here. A few plug-ins *will* of course be
executing in parallel in such a system, as in SMP systems, but that still
doesn't necessarily mean you run only one plug-in per CPU, using close to 100%
of the CPU time...

> >> the way that quasimodo (+supercollider +csound) would handle this is
> >> that the thread handling MIDI input would cause a callback to run. the
> >> callback would fiddle with the parameters of the plugin (without
> talking to the plugin, or queueing anything up anywhere), and if the
> >> plugin is running, it will simply use the new value.
> >
> >...and that happens only a fraction of the times you get an event. And unless
> >you're only running one plug-in, that takes nearly 100% of the CPU power, the
> >resulting effect on the output will not have much to do with the actual timing
> >of the input event.
>
> i accept this observation. it's a good one that i hadn't paid too much
> attention to. however, i didn't write things this way to improve the
> input event latency (which i once believed was bounded by the control
> cycle anyway), but to reduce the overhead of handling input events. i
> will try to think a little more about which of our two schemes really
> does this.

That depends on what you want to do, I guess. (And on the quality of the
implementation itself, of course. Crappy code can kill any design...)
Basically, my system should be more efficient if you really want high accuracy
without decreasing the buffer size. But if ultra low latency is the main goal,
my system will probably only result in a little more flexibility at a rather
high cost in the form of overhead.

However, I think that when used in situations where "normal" latencies are
good enough, this flexibility will be very good for the usefulness of the
plug-in API. Frankly, what's more important: a few % of CPU, or the ability to
use a common plug-in for just about anything, without forcing end users to
fiddle with multiple, incompatible systems? True, the extra transparency that
a buffered event system provides is mostly of interest to high-end users, but
that's not the only point of it.
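
Just to show what I mean by a buffered event system, here's a minimal sketch
(my own ad hoc naming, nothing from the actual proposal): an event carries a
frame offset into the block the plug-in is about to render, so a parameter
change lands on the exact sample the sender asked for, regardless of the
buffer size.

/* Sketch only: a timestamped parameter event as I picture it.  The
 * timestamp is a frame offset into the current block, so the control
 * change takes effect on exactly that sample, independent of block size. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t frame;     /* offset into the current block, 0..frames-1 */
    uint32_t param;     /* which control to change */
    float    value;     /* new control value */
} event_t;

typedef struct {
    float gain;         /* the single control of this toy plug-in */
} gain_plugin_t;

/* Render one block, applying queued events at their exact frame offsets.
 * The host is assumed to deliver the events sorted by frame. */
static void gain_run(gain_plugin_t *p,
                     const float *in, float *out, size_t frames,
                     const event_t *ev, size_t n_ev)
{
    size_t e = 0;
    for (size_t i = 0; i < frames; i++) {
        while (e < n_ev && ev[e].frame == i) {
            if (ev[e].param == 0)
                p->gain = ev[e].value;  /* takes effect on this very sample */
            e++;
        }
        out[i] = in[i] * p->gain;
    }
}

A plug-in that doesn't care about sample accuracy can of course just apply all
events up front and process the whole block in one go; the cost is only paid
where the accuracy is actually wanted.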

[...]
> >Hmm... To me, this sounds like you're actually talking about my system.
> >Certainly, your plug-ins don't know about real time in the normal case - but
> >how do you guarantee that the time stamp is the same for all plug-ins during
> >one cycle, if "events" are allowed to take effect in the middle of the cycle?
>
> because there are no "events": parameter updates are handled in a
> single step, by a thread that is not running the DSP simulation
> (typically a ui-thread or an event-manager thread). so time stamps on
> parameter changes simply don't exist. my system doesn't use timestamps
> at all. there are no timestamped buffers, no timestamped events, no
> timestamps for anything at all.

Exactly.

> the notion of time is not based on "timestamps", but is synced to the
> DAC.

Yes, but not with sample resolution...

> if a plugin is executing in any given cycle, then its query of
> the current time will always return the same value. that doesn't mean
> there have been no events - it means it's busy generating the same
> output buffer (and working on the same input buffer) as everybody
> else who will be executed during that cycle.

"will be" is very important here.

> i originally foolishly thought i could get away from this scheme,
> since it was something in Csound that i hated. after several weeks of
> numerous different approaches, i decided that vercoe and friends knew
> something after all :)

Well, they certainly knew how to design a simple and *efficient* system! :-)

> just to reiterate: Quasimodo makes a huge distinction between those
> events that require a change in the program the DSP simulation is
> running, and those that do not. The former get sent to the DSP, and
> are processed at the beginning (or end, depending on how you look at
> it) of every control cycle. All other events are handled
> asynchronously without any communication with the DSP thread at all.

Yes, but that's a slightly different discussion. Of course, modifying the net
in real time is very useful, and my system will allow that as well. However,
that has nothing to do with my event system, other than that it might be done
by sending events to the engine itself. Rerouting the signal and event flows is
still a low-level job done in the engine.
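
By "sending events to the engine itself" I mean something along these lines
(sketch only, all names invented): the graph change is just another event,
addressed to the engine rather than to a plug-in, queued from the UI thread
and applied by the engine between cycles, where it's safe to touch the net.

/* Sketch only: a "connect" event addressed to the engine.  The UI thread
 * just queues it; the actual rerouting is done by the engine between
 * control cycles, when no plug-in is running.  (Again, a real lock-free
 * ring would need proper atomics/barriers; this only shows the idea.) */
#include <stdbool.h>
#include <stdint.h>

typedef enum { EV_CONNECT, EV_DISCONNECT } engine_ev_type;

typedef struct {
    engine_ev_type type;
    uint32_t src_plugin, src_port;   /* output to take the signal from */
    uint32_t dst_plugin, dst_port;   /* input to feed it into */
} engine_event;

#define RING_SIZE 64                 /* single writer, single reader */
static engine_event ring[RING_SIZE];
static volatile uint32_t ring_head, ring_tail;

bool engine_post(const engine_event *ev)     /* called from the UI thread */
{
    uint32_t next = (ring_head + 1) % RING_SIZE;
    if (next == ring_tail)
        return false;                        /* queue full; try again later */
    ring[ring_head] = *ev;
    ring_head = next;
    return true;
}

void engine_apply_pending(void)              /* engine, between control cycles */
{
    while (ring_tail != ring_head) {
        engine_event *ev = &ring[ring_tail];
        /* ...patch the signal and event routing tables here... */
        (void)ev;
        ring_tail = (ring_tail + 1) % RING_SIZE;
    }
}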

> and i know that you know this, but the system i'm describing is
> implemented, fully functional, and about to go to release 0.1.7 this
> weekend :)

Yes, but does that mean it's the perfect solution? ;-)

//David

 ·A·U·D·I·A·L·I·T·Y· P r o f e s s i o n a l L i n u x A u d i o
- - ------------------------------------------------------------- - -
    ·Rock Solid David Olofson:
    ·Low Latency www.angelfire.com/or/audiality ·Audio Hacker
    ·Plug-Ins audiality_AT_swipnet.se ·Linux Advocate
    ·Open Source ·Singer/Composer


