Re: [linux-audio-dev] Synth APIs, GluttonAudio


Subject: Re: [linux-audio-dev] Synth APIs, GluttonAudio
From: David Olofson (david_AT_olofson.net)
Date: Fri Dec 13 2002 - 17:30:30 EET


On Friday 13 December 2002 14.06, Sami P Perttu wrote:
> On Thu, 12 Dec 2002, David Olofson wrote:
> > Still, that does not prevent a great deal of overlap, although
> > XAP gets more and more expensive the more you "abuse" it - just
> > like full audio rate or blockless systems tend to be less
> > efficient when used more like "VSTi style" synths.
>
> I thought XAP was intended to replace LADSPA (in addition to
> extending it).

Well, it might - unless people consider XAP too complex for their
plugins.

Obviously, we'll try to make XAP as simple, efficient and easy to use
as realistically possible - but given the requirements, it's not
realistic to make it as simple as LADSPA.

> > > about integrating audio data into event queues as well? These
> > > events would contain a pointer to a buffer. There - a single
> > > [...]
> >
> > I mentioned in another mail that I did consider this for MAIA. I
> > also mentioned that it brings implementational issues for
> > Plugins... (Asynchronous buffer splitting is nasty to deal with.)
>
> What is that? If a plugin got an audio port event it would just
> update a pointer. Past samples should be kept in temporary
> variables and future samples should not be looked at.

Correct. But consider this example:

        process(..., int frames)
        {
                int s = 0;
                while(frames)
                {
                        ...process events...
                        for(samples until next event)
                        {
                                out[s] = left[s] + right[s];
                                ++s;
                        }
                }
        }

Now, if you get a new buffer for left in the middle of the buffer,
what happens...? This is what I'd call "non-obvious implications".

Sure, you can deal with this by having one index for each buffer, or
simply incrementing the pointers, but that impacts performance in the
inner loops. (Yes, incrementing pointers is a bad idea on most
reasonably modern CPUs! They're designed for indexing.)
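
For example, the "one index per buffer" variant of the inner loop
would look roughly like this (just a sketch; the names are made up):

        /* One index per buffer; when an event replaces the 'left'
           pointer, only s_left is reset to 0. */
        out[s_out] = left[s_left] + right[s_right];
        ++s_out;
        ++s_left;
        ++s_right;

Three live counters instead of one, in exactly the loop where you
want as little bookkeeping as possible.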

I'm pretty sure most plugins would crash the first time a host
actually made use of this feature, except possibly if you kept
yelling at all plugin developers all the time.

IMHO, this is complicating the API for no good reason.

> > My conclusion has to be that for both complexity and performance
> > reasons, you're better off with physical Audio Ports, that
> > effectively are nothing but buffer pointers, maintained by the
> > Host. (LADSPA style.) This is simple and effective, and avoids
> > some complexity in Plugins.
>
> Point conceded, although some of your arguments apply to controls
> as well.

Yes, but controls are *intended* to be sample accurate, and for good
reasons. Deal with it, or use LADSPA.

There are (IMNSHO) no good reasons whatsoever to support sample
accurate buffer splitting on a per-Port basis. (And I'm the one who
introduced this very concept when MuCoS/MAIA was discussed...)

> I don't see anything physical about audio ports in XAP,
> although it might be better if they were actual objects.

Well...

        typedef XAP_sample *XAP_buffer;

Do you need an object for that? :-)

Point is, plugins will probably need their own per-Port data
internally, and since we cannot know what they need, there is no
point in suggesting anything in the API. Let Audio Ports be abstract
objects, and just pass a pointer to an array of buffer pointers when
you connect an array of Audio Ports.
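
Something like this, very roughly (the function and constant names
below are made up for illustration, not actual XAP API):

        typedef float XAP_sample;       /* or whatever we end up with */
        typedef XAP_sample *XAP_buffer;

        /* Host side: one buffer per Audio Port, handed over as a
           single array of buffer pointers in one "connect" call.
           BLOCK_SIZE is whatever block size the host runs at. */
        XAP_buffer bufs[2];
        bufs[0] = calloc(BLOCK_SIZE, sizeof(XAP_sample));
        bufs[1] = calloc(BLOCK_SIZE, sizeof(XAP_sample));
        plugin->connect_audio_inputs(plugin, bufs, 2);

The plugin just keeps the 'bufs' pointer around and indexes it by
Port; what it does with that per-Port data internally is its own
business.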

> > Being that it started out as little more than a list of
> > "commands" and arguments (as an alternative to a bunch of
> > hardcoded C calls), it is currently *very* imperative.
> >
> > However, due to the nature of the underlying audio rendering API,
> > it would not be far off to interpret the scripts as net
> > descriptions instead of step by step instructions. Just add
> > plugins instead of running commands, and then run the result in
> > real time.

BTW, I am indeed going to look into some sort of "process only this
range" feature for the rendering engine. Some sounds take a while to
render, and there's no point in waiting for the whole sound to render
when you're just tweaking the attack.

> > I have seriously considered this approach, but I'm not sure I'll
> > do it that way - at least not as an implicit, only way of using
> > the language. (The language itself is indeed imperative; this is
> > more a matter of what the extension commands do.)
>
> Okay, I can see how that would be useful. I have a graphical
> network editor in GluttonAudio (I've just decided to name it that)
> but without any scripting it's not flexible enough. When you start
> adding script plugins, however...

Yeah... I started at the other end, basically. What I have in mind
for the next step is a sort of "smart" text editor that understands
the commands of the rendering backend and lets you view and edit them
graphically, as little panels that replace them in the source. That
way, you can edit envelopes and stuff, even inside functions and
control flow statements.

[...chained ramp events...]
> Logically, controls are like ports: they have a definite value at
> each point in time.

You *could* say that - but don't forget that many plugins are
oversampling internally...

Even Audiality's off-line engine is affected. Envelopes generate
audio rate output, while some oscillators and filters are oversampled
internally. As a result, the filters and oscillators see zipper noise
on their inputs, unless they filter or ramp internally in some way,
just like they should do with the audio input.

Now, if control were seen as continuous rather than sampled, it
would just be a matter of applying the ramp events at the internal
sample rate. More accurate, easier, and cheaper.

> So it would make sense to have a
> XAP_control_variable type or something whose value could be set
> (via a macro) from an event and whose value could be updated (also
> via a macro) after each sample is processed.

Yeah, that's an idea I considered for MAIA; wrapping most of the
standard event handling in the plugin SDK. It could work for
oversampled inner loops as well, if done right. (Just a matter of
being able to scale the "duration" parameter before the "set up ramp"
macro gets at it.)
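
A rough sketch of how that could look (all names hypothetical,
nothing decided):

        typedef struct
        {
                float value;            /* current control value */
                float delta;            /* per-(internal-)sample increment */
                int frames_left;        /* frames left until the target */
        } XAP_control_variable;

        /* Set up a linear ramp from an event. 'oversample' lets an
           oversampled plugin scale the duration, so the ramp runs at
           its internal sample rate. */
        #define XAP_CV_RAMP(cv, target, duration, oversample)           \
                do {                                                     \
                        (cv)->frames_left = (duration) * (oversample);   \
                        if((cv)->frames_left)                            \
                                (cv)->delta = ((target) - (cv)->value)   \
                                        / (float)(cv)->frames_left;      \
                        else                                             \
                                (cv)->value = (target);                  \
                } while(0)

        /* Advance the control by one (internal) sample. */
        #define XAP_CV_STEP(cv)                                          \
                do {                                                     \
                        if((cv)->frames_left)                            \
                        {                                                \
                                (cv)->value += (cv)->delta;              \
                                --(cv)->frames_left;                     \
                        }                                                \
                } while(0)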

> Whatever the sections end up being, spline calculations should be
> done by the host.

How? It's not the host that sends these events in general; it's other
plugins. The host never touches the events in the normal case.

> What the plugin would see are just polynomial
> coefficients. With third-order sections you get continuity of first
> derivatives, etc.

Events should probably contain something less specific than
polynomial coefficients. Many plugins can't use them directly anyway,
since they have to ramp/interpolate internally for performance
reasons.

OTOH, calculating a few points from given polynomial coefficients
isn't all that hard... Just multiply by x, x*x and x*x*x, where x
is the offset from the start of the section to the point you want, in
sample frames.
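
That is, something like this (just illustrating the arithmetic; the
name and coefficient layout are made up):

        /* Evaluate one cubic section at frame offset x from its
           start; a0..a3 would come from the section event. */
        static inline float eval_section(float a0, float a1,
                                         float a2, float a3, float x)
        {
                return a0 + a1*x + a2*x*x + a3*x*x*x;
        }

(Or the Horner form, a0 + x*(a1 + x*(a2 + x*a3)), if the last few
cycles matter.)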

That said, the problem is that you cannot get away without actually
implementing this. If there are cubic interpolation events, all
plugins must support them, or there will be lots of little converter
plugins everywhere...! :-)

[...]
> This is what I meant with optimizations. If you have a biquad with
> a linearly changing cutoff frequency, for instance, you can
> optimize away calls to trigonometric functions inside the inner
> loop without resorting to approximation; just use the old "spinning
> point on the unit circle" trick. Hmm, I realize now that the same
> can be done with a polynomial of any degree but the computational
> benefit gets smaller and smaller. For this reason it might make
> sense to just stick to constant values and linear ramps. You could
> then have plugins that generated piece-wise linear approximations
> of splines and the like with any accuracy desired.
>
> So: I vote for constant and linear controls.
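
(For reference, the "spinning point" trick is just one complex
multiplication per sample instead of a sin/cos pair; roughly, with
made-up names:)

        /* Cutoff angle w starts at w0 and increases by dw per sample. */
        double c = cos(w0), s = sin(w0);        /* the "spinning point" */
        double dc = cos(dw), ds = sin(dw);      /* fixed rotation step */
        for(i = 0; i < frames; ++i)
        {
                /* Here c == cos(w) and s == sin(w); plug them into
                   the biquad coefficient formulas. */
                double c2 = c * dc - s * ds;
                s = c * ds + s * dc;
                c = c2;
        }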

Well, you could just use the extra argument as "slope", to suggest
the desired slope when "target" is reached. So, for chained sections,
you get <value, slope> for start and end of each section. Either just
ignore the slopes and do linear interpolation, or do something more
sophisticated, to take the slopes into account as well. (Basic splines;
"smoothest path".)

//David Olofson - Programmer, Composer, Open Source Advocate

.- The Return of Audiality! --------------------------------.
| Free/Open Source Audio Engine for use in Games or Studio. |
| RT and off-line synth. Scripting. Sample accurate timing. |
`---------------------------> http://olofson.net/audiality -'
.- M A I A -------------------------------------------------.
| The Multimedia Application Integration Architecture |
`----------------------------> http://www.linuxdj.com/maia -'
   --- http://olofson.net --- http://www.reologica.se ---

