Re: [linux-audio-dev] plug-in design space


Subject: Re: [linux-audio-dev] plug-in design space
From: David Olofson (audiality_AT_swipnet.se)
Date: Fri Aug 27 1999 - 23:17:36 EDT


On Sat, 28 Aug 1999, est_AT_hyperreal.org wrote:
> Eli Brandt discourseth:
> > How about a couple of larger-scale design questions before we get down
> > to bits and bytes?
>
> Basic motivation is a really important question. In a separate
> posting I described mine: frustration with all the wheel re-invention
> going on. That definitely leads to different criteria than Designing
> the Right Architecture does.

Yep, prioritizing design over finding out what is actually needed results in
nice solutions addressing non-issues... Everything must be kept in mind at all
times during the design phase.

> > * Is this for real-time streaming, non-real-time editing, or both?
> >
> > I vote a big "both".
>
> I second that! Wide variation along this dimension is central to our
> community.

Both. And it shouldn't be all that hard to take a pseudo real time system (*)
down to non real time. It's doing it the other way around that's hard or
impossible; yet people try, over and over again...

(*) By pseudo real time, I mean that we're not working at full time
resolution, not even with Audiality/RTLinux. (Well, you *could*, but... what a
waste of CPU cycles!) That is, we're not going to catch one sample, process it,
and then output it. We're dealing with buffers, which means events need time
stamps so that we can tell exactly where they belong. The positive side effect
is that timing moves into the data stream position domain, so non real time
processing comes for free.
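
To make the idea concrete, here is a minimal C sketch of timestamped events
inside a buffer-based process loop. All the names and the struct layout are
illustrative assumptions, not part of any actual API; events are assumed to be
sorted by timestamp.

```c
#include <stdint.h>

/* An event carries a timestamp in sample frames from the start of the
 * stream, so the engine can place it exactly within the current buffer
 * even though processing happens one buffer at a time. */
typedef struct {
    uint32_t frame;   /* absolute position in the stream, in frames */
    int      type;    /* e.g. note-on, parameter change */
    float    value;
} event_t;

/* Split the buffer at each event timestamp: render up to the event,
 * apply the event, then continue. The toy "plug-in" here has a single
 * level parameter that events set, sample-accurately. */
static void process_buffer(const event_t *events, int n_events,
                           uint32_t buf_start, uint32_t buf_frames,
                           float *out)
{
    float level = 0.0f;  /* would persist across buffers in real code */
    uint32_t pos = 0;
    for (int i = 0; i < n_events; i++) {
        if (events[i].frame < buf_start ||
            events[i].frame >= buf_start + buf_frames)
            continue;                          /* not in this buffer */
        uint32_t split = events[i].frame - buf_start;
        for (; pos < split; pos++)
            out[pos] = level;                  /* render up to the event... */
        level = events[i].value;               /* ...then apply it */
    }
    for (; pos < buf_frames; pos++)
        out[pos] = level;                      /* render the remainder */
}
```

Since timing lives in the stream position rather than in wall-clock time, the
same loop works identically whether the buffers arrive in real time or not.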

> > RT streaming pretty much requires the use of
> > data-pull, walking backwards up the graph:
> > > The algorithm is: "I need X samples. How many input samples do you need
> > > for each channel?"
> > But the answer may be "I dunno, I'll have to see what the values of my
> > inputs are." A voltage-controlled time-warper, for example. This
> > works fine in non-RT, where it's more natural to do data-push down the
> > graph. Support this?

Yep. As long as you can specify how far off the "now" position you'll get,
there's no problem.
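
That "how far off now" declaration could be as simple as a latency field in a
plug-in descriptor that the host sums along each path for compensation. This
is a sketch under assumed names, not any existing API:

```c
#include <stdint.h>

/* Each plug-in declares how far behind "now" its output is, so the
 * host can compensate when mixing paths of different latency. */
typedef struct {
    uint32_t latency_frames;  /* output lags input by this many frames */
} plugin_info_t;

/* Latencies add up for plug-ins connected in series. */
static uint32_t path_latency(const plugin_info_t *chain, int n)
{
    uint32_t total = 0;
    for (int i = 0; i < n; i++)
        total += chain[i].latency_frames;
    return total;
}
```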

The next step is using a data streamer that supports bidirectional streaming
as your data source. As long as your plug-in tells the streamer what to load
in time, the data will be there.

Time travelling isn't planned, though. ;-)

> I think we need to support both even though both might not be used in
> a given app. My focus is a spec that doesn't get in people's way. :)

And I don't think it will. My skip_behind/look_ahead idea should make quite a
few tricks possible, but you can just ignore those parameters if you're happy
with the VST style approach.
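
David doesn't spell the parameters out here, so this is only one possible
reading of the skip_behind/look_ahead idea: the host guarantees some extra
valid context around the nominal buffer, and a plug-in that doesn't care can
ignore the two parameters entirely. The function name and signature are
invented for illustration.

```c
#include <stdint.h>

/* A toy smoother that uses look_ahead when it is available: the host
 * promises that in[frames] .. in[frames + look_ahead - 1] are valid
 * samples beyond the nominal buffer end. */
static void smooth_process(const float *in, float *out, uint32_t frames,
                           uint32_t skip_behind, uint32_t look_ahead)
{
    (void)skip_behind;  /* this plug-in only looks forward */
    for (uint32_t i = 0; i < frames; i++) {
        /* reading past the buffer end is safe if look_ahead >= 1 */
        float next = (i + 1 < frames || look_ahead >= 1) ? in[i + 1]
                                                         : in[i];
        out[i] = 0.5f * (in[i] + next);
    }
}
```

A VST-style host would simply pass zero for both parameters and the plug-in
degrades gracefully at the buffer edge.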

> > * If non-RT, can output keep running after input has been used up?
>
> Yeah, we need to specify a drain method. This could be the same as
> the process method with a 0-size inbuf.

My event system again: send the plug-in an event asking it to report when its
internal state is drained and the output will remain silent. The plug-in just
keeps this in mind, and sends a reply event when it decides it has nothing
more to output.
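
The drain handshake described above might look like this in C; the event
names, the decaying "tail" stand-in for internal state, and the threshold are
all made up for the sketch.

```c
/* Hypothetical drain protocol: the host sends EV_DRAIN once; the
 * plug-in keeps processing its tail (reverb, delay lines...) and
 * replies EV_DRAINED when the output has died out. */
enum { EV_DRAIN, EV_DRAINED, EV_NONE = -1 };

typedef struct {
    float tail;      /* stand-in for internal state, e.g. a decay */
    int   draining;
} plugin_t;

/* Called once per buffer. Returns EV_DRAINED when there is nothing
 * left to output, EV_NONE otherwise. */
static int run_with_drain(plugin_t *p, int event)
{
    if (event == EV_DRAIN)
        p->draining = 1;
    p->tail *= 0.5f;                   /* tail decays each buffer */
    if (p->draining && p->tail < 1e-6f)
        return EV_DRAINED;             /* reply: output stays silent */
    return EV_NONE;
}
```

The nice part is that a non-RT host gets its "drain method" and an RT host
gets its end-of-tail notification from the same mechanism.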

> > * What about sampling rates?
>
> Well, I expect we'll have fixed-rate and variable rate plugins. I
> generally code to variable myself.

It's pretty useful in some cases. For example, if all plug-ins in use support
variable sample rates, the system can adjust the sample rate according to the
CPU load. That is, you get a slight treble drop instead of stuttering,
freezes, and an emergency engine shutdown when you overload.
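
A load-adaptive policy of that kind could be as simple as the following
sketch. The thresholds, the 0.9 step factor, and the 48 kHz ceiling are
arbitrary illustration values, not anything from the discussion.

```c
/* Lower the engine sample rate under overload instead of stuttering;
 * creep back toward full quality when there is CPU headroom. */
static double adapt_rate(double rate, double cpu_load)
{
    if (cpu_load > 0.95)
        return rate * 0.9;         /* overloaded: trade treble for stability */
    if (cpu_load < 0.70 && rate < 48000.0) {
        double up = rate / 0.9;    /* headroom: raise the rate again... */
        return up > 48000.0 ? 48000.0 : up;  /* ...up to full quality */
    }
    return rate;
}
```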

> > * What's static and what's dynamic?
> >
> > It would be nice to support variable numbers of inputs and outputs: An
> > n-to-m matrix mixer, for example, where n and m are determined upon
> > instantiation of the prototype.
>
> I'm fond of this sort of thing..but it sure gets complex. :)

The engine would have to change the size of the buffer pointer arrays passed
to the process() function. It would probably be easiest if you can only change
the number of channels - not their order. I think the most complex parts will be
modifying the processing net, and keeping track of things like "the channel
removed was NOT the last one in the list."
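
For the n-to-m matrix mixer example, a process() call built on buffer pointer
arrays might look like this. The signature is an assumption for the sketch;
the point is that n and m are plain runtime values fixed at instantiation.

```c
#include <stdint.h>

/* n-to-m matrix mixer: each output is a gain-weighted sum of all
 * inputs. The engine owns the pointer arrays and can resize them
 * between process() calls when channels are added or removed. */
static void matrix_process(float *const *in, int n_in,
                           float *const *out, int n_out,
                           const float *gains,   /* n_out x n_in, row-major */
                           uint32_t frames)
{
    for (int o = 0; o < n_out; o++) {
        for (uint32_t f = 0; f < frames; f++) {
            float acc = 0.0f;
            for (int i = 0; i < n_in; i++)
                acc += gains[o * n_in + i] * in[i][f];
            out[o][f] = acc;
        }
    }
}
```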

> > * How are feedback loops handled?

See my other post on that. Perhaps I should expand on the engine's part in it as
well...?

> > * Is each `wire' mono?
> >
> > Here's where I stepped in. My leaning is towards mono wires for
> > simplicity. Performance is not clear to me.
>
> It can also complicate memory management if all multi-channel code
> has to be rewritten to merge its input streams to become a plugin.

I think supporting both is a good idea for that reason. (And about only that;
mono is the way to go from the design perspective, IMO.) So, the rule is that
you should design your plug-ins for mono signals, but if you need other
formats for performance reasons (SIMD code, for example), you should have the
engine do the conversions for you.

A few points on this...

* The engine need not do any conversion if the
  plug-ins connected happen to fit together.

* Conversion can be optimized in the engine.

* Complexity is moved away from the plug-ins.
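
The engine-side conversion in question is essentially de-interleaving and
re-interleaving, which is easy to centralize and optimize once. A plain
(unoptimized) sketch, with invented names:

```c
#include <stdint.h>

/* Interleaved -> per-channel mono buffers. Only needed when two
 * connected plug-ins actually disagree on format. */
static void deinterleave(const float *src, float *const *dst,
                         int channels, uint32_t frames)
{
    for (uint32_t f = 0; f < frames; f++)
        for (int c = 0; c < channels; c++)
            dst[c][f] = src[f * channels + c];
}

/* Per-channel mono buffers -> interleaved. */
static void interleave(float *const *src, float *dst,
                       int channels, uint32_t frames)
{
    for (uint32_t f = 0; f < frames; f++)
        for (int c = 0; c < channels; c++)
            dst[f * channels + c] = src[c][f];
}
```

Plug-ins themselves stay mono; the engine inserts one of these shims only at
edges of the net where the formats don't already fit together.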

> Eli, this is one great set of points/questions! It's posts like this
> that will create a good spec. :)

Agree! This is exactly what I wanted. :-)

//David

 ·A·U·D·I·A·L·I·T·Y·   P r o f e s s i o n a l   L i n u x   A u d i o
- - ------------------------------------------------------------- - -
    David Olofson
    www.angelfire.com/or/audiality
    audiality_AT_swipnet.se
    ·Rock Solid ·Low Latency ·Plug-Ins ·Open Source
    ·Audio Hacker ·Linux Advocate ·Singer/Composer



This archive was generated by hypermail 2b28 : Fri Mar 10 2000 - 07:25:53 EST