Re: [linux-audio-dev] Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage

Subject: Re: [linux-audio-dev] Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: David Olofson (david_AT_gardena.net)
Date: Sat Nov 25 2000 - 07:11:04 EET


On Tue, 21 Nov 2000, Paul Sladen wrote:
> On Tue, 21 Nov 2000, Benno Senoner wrote:
>
> > Yesterday I had a chat with David Olofson about these issues:
> >
> > we came to the conclusion that although some object oriented model
> > will be required, we need to keep it as simple as possible
> > (keep it simple stupid).
> >
> > And for this, C++ may not be the right way to go.
> > As we all know, C++ is a difficult beast to handle, and you need to
> > know what you are doing when dealing with abstract objects AND wanting
> > RT-safe behaviour.
>
> Anything that /depends/ upon a specific language is fundamentally flawed..

Yep, that's why C++ style OO isn't very good to have around a low
level API. We could do without the overhead of hiding it from less
powerful languages, and of interfacing it with other high level
languages based on different philosophies.
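
Just to make it concrete (a hypothetical sketch with made-up names, not
an actual MuCoS or LADSPA header): a plain C interface like the one
below is trivial to bind from just about any language, since it's
nothing but a struct of data and function pointers behind an opaque
handle.

/* Hypothetical sketch - names made up for illustration. */
typedef struct plugin_descriptor
{
        const char      *label;         /* unique identifier */
        unsigned long   port_count;

        /* Create an instance; returns an opaque handle. */
        void *(*instantiate)(const struct plugin_descriptor *desc,
                        unsigned long sample_rate);

        /* Hook a port up to a buffer owned by the host. */
        void (*connect_port)(void *handle, unsigned long port,
                        float *buffer);

        /* Process one block - the only entry point that has to
         * be RT safe.
         */
        void (*run)(void *handle, unsigned long frames);

        void (*cleanup)(void *handle);
} plugin_descriptor;

/* The one symbol the host looks up via dlopen()/dlsym(). */
const plugin_descriptor *get_descriptor(unsigned long index);

No name mangling, no exceptions, no object model to translate - a
Python (or whatever) binding is just a thin wrapper around a handful
of function pointers.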

> if you have an API that ports easily across all languages, then you know
> that you have come up with a good answer... Why /shouldn't/ I be able to
> knock up a quick plug in Python etc...
>
> <abstract rant>
> OpenGL is an example of an API that has successfully been bound to
> many, many languages... and while not all of us (i.e. me) may have had
> the benefit of 10 years' experience in the field, and various previous
> attempts, SGI did have when they designed OpenGL.
>
> It has taken Microsoft *8* versions of Direct3D to do all the things that
> could be done with OpenGL 1.0!!!!

Well, it appears that Microsoft did it that way deliberately, which
wouldn't be too surprising, as that's the way they keep others from
supporting their "standards"... Having seen both sides, it's hard to
avoid viewing Microsoft APIs as marketing and control tools, compared
to real APIs like OpenGL. They do things that look nice... to anyone
who doesn't know much about design.

Now, one *could* suggest that SGI is orders of magnitude more
competent than Microsoft, but I consider that more likely to be M$
bashing than a serious argument. (Let's say half an order of
magnitude! ;-)

> > (ever tried to write a Qt application in C? no way... (almost)).
> >
> > Plus, delegating the establishment/destruction of connections to
> > a separate thread is not going to solve our problems.
> But what it does do is avoid disrupting the RT process when it /needs/
> to be running. The server needs to run non-RT, and the plugin chain, and
> only the plugin chain, RT.
>
> > There may be cases where we want to make/modify connections in realtime
> > without glitches and other sources of non-determinism and here we need
> > to be VERY careful.
> What needs to happen is that the non-RT side does all the calculation,
> and then places all the changes into a lock-free FIFO for the RT chain to
> pick up after/before processing the next block.
>
> After is better for realtime/online schemes, where latency is the worry,
> whereas before is better for offline processing using big block sizes.

If the events that trigger the net rearrangements are real time (for
example MIDI input), everything needs to be done in the RT thread.
Many soft synths work that way, for various reasons.
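
For the non-real time cases Paul describes, though, the FIFO itself
can be as simple as this; a rough sketch assuming a single non-RT
writer and a single RT reader, with the command fields made up for the
example (memory barriers left out for clarity):

#define CMD_FIFO_SIZE   256     /* must be a power of two */

typedef struct
{
        int     type;                   /* e.g. CONNECT, DISCONNECT */
        int     from_plugin, from_port;
        int     to_plugin, to_port;
} connection_cmd;

typedef struct
{
        connection_cmd  buf[CMD_FIFO_SIZE];
        volatile unsigned read_pos;     /* touched only by the RT reader */
        volatile unsigned write_pos;    /* touched only by the non-RT writer */
} cmd_fifo;

/* Non-RT thread; returns 0 if the FIFO is full. */
static int fifo_write(cmd_fifo *f, const connection_cmd *cmd)
{
        unsigned next = (f->write_pos + 1) & (CMD_FIFO_SIZE - 1);
        if(next == f->read_pos)
                return 0;
        f->buf[f->write_pos] = *cmd;
        f->write_pos = next;
        return 1;
}

/* RT thread; call before (or after) processing each block. */
static int fifo_read(cmd_fifo *f, connection_cmd *cmd)
{
        if(f->read_pos == f->write_pos)
                return 0;
        *cmd = f->buf[f->read_pos];
        f->read_pos = (f->read_pos + 1) & (CMD_FIFO_SIZE - 1);
        return 1;
}

Neither side ever blocks, and the worst the RT side has to do is drain
a few commands per block.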

> > David and I discussed the SMP issues too: how to spread the load among CPUs,
> > reduce the interdependencies and try to keep latency as low as possible.
> > We haven't solved the problem fully, but what's for sure is that we need to
> > make certain tradeoffs about what kinds of plugin trees can be processed.
>
> The only option that I've come up with is passing a "timeframe" parameter
> around too.... example:
>
>
>    +======================+
>    |          P1          |
>    +====v============v====+
>         V            V
>    +====V====+  +====V====+
>    |   P2    |  |   P3    |
>    +====v====+  +====v====+
>         V            V
>    +====V============V====+
>    |          P4          |
>    +====v============v====+
>         V            V
>
> P4 (in my system) makes requests to P3, and P2, who in turn make a request
> to P1.

You have to make sure that the "requests" are nothing like IPC, or
you'll end up with (at least) twice as many context switches as with
a non-recursive model. That's where this becomes a real problem, as
context switches not only mean overhead, but also the risk of getting
scheduled out for too long.
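
That's also why I'd rather have the non-RT side flatten the graph into
a fixed processing order (a plain topological sort) and hand that to
the RT thread, which then just walks the list - no requests, no
recursion, no extra context switches. A rough sketch, with made-up
names and the descriptor struct from the earlier sketch:

/* One entry per plugin instance; 'order' is topologically sorted by
 * the non-RT thread, so by the time a plugin runs, everything that
 * feeds it has already been rendered.
 */
typedef struct
{
        const plugin_descriptor *desc;
        void                    *handle;
} plugin_instance;

static void process_block(plugin_instance **order, int count,
                unsigned long frames)
{
        int i;
        for(i = 0; i < count; ++i)
                order[i]->desc->run(order[i]->handle, frames);
}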

> (if P1 is capable, an optimisation to this could be done, knowing that P1
> will have been called by P2, it knows that it doesn't have to tell P3 to
> call P1 /aswell/... but this is an optimisation).

A quite tricky one at that - how can P1 possibly know that P2 and P3
will wake up P4, and only P4? Definitely nothing one would want
inside plugins.

> -*- But!
>
> Say that P2 is a reverb or band-pass filter, and caches the last "n"
> samples of input.... and the two chains get called "out-of-order"...
>
> Ok, so we create two discrete /chains/ or instances?... But what happens
> about the second instance of P2 picking up the first instance of P2's
> cached input... this needs a timeframe to allow for this type of thing,
> and one thing that it can do is to block until the previous iteration has
> finished processing.
>
> i.e. it has to wait until it has processed all its input, and then can
> proceed and unblock on the second channel.
>
> Problem in the sentence; the word "block"... don't like this.

No way around that. If there are no runnable plugins, you have to
block, or spin. The former is generally a better idea, but here we
have the problem that we're operating close to the worst case
scheduling latency. That is, we can't afford to accumulate "chances
of hitting latency peaks", as that will eventually result in
occasional drop-outs under heavy, but normally safe, load.
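
To spell out the two options (a sketch with made-up names; 'done' and
'done_sem' would be set/posted by the worker on the other CPU when the
buffer we're waiting for is ready):

#include <semaphore.h>

typedef struct
{
        volatile int    done;           /* set by the other CPU */
        sem_t           done_sem;       /* posted by the other CPU */
} peer_state;

/* Spin: burns the CPU, but keeps the scheduler out of the loop. */
static void wait_spinning(peer_state *peer)
{
        while(!peer->done)
                ;
}

/* Block: frees the CPU, but we pay for a wake-up - and risk getting
 * scheduled out for longer than we can afford, which is exactly the
 * "latency peak" problem above.
 */
static void wait_blocking(peer_state *peer)
{
        sem_wait(&peer->done_sem);
}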

> > The main issue is the interdependency problem:
> > if one CPU has to wait for the results of the other then you must either halt
> > execution on that one, or use some kind of pipelining which increases latency.
> > And with a few of such interdependency places, low-latency vanishes.
> >
> > The tradeoff would be to process individual independent chains which can be
> > spread among the available CPUs (or nodes in a cluster).
> > And to allow only a small number of interdependency places.
> > (that would be the final downmix/routing of all tracks or some common
> > FX processing)
>
> I guess what the plugin needs is a flag stating whether it relies on
> previously passed data, or whether it is **completely** dependent on its
> input and its input /only/. GCC has a __keyword__ for this, so there is
> a word that means it. (GCC has nothing to do with the problem at hand
> before I get anyone confused!).

The *amount* of state data is more interesting than knowing whether
or not there is any state data at all. The reason is that you can
actually *win* if you let a plugin migrate, as long as the plugin's
state data is smaller than the audio buffers that would otherwise
have to be transferred.

Note: This applies to chains just as if a chain == one plugin, which
      means that the state data adds up, while the streams for the
      inputs and outputs remain unchanged. That's one of the reasons
      why this method probably won't work too well in most real life
      applications.
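
To put a rough number on it (made-up figures):

#include <stddef.h>

/* Back-of-the-envelope check, names made up: does moving the plugin
 * (or chain) beat streaming its audio across every block?
 */
static int worth_migrating(size_t state_bytes,
                int in_ports, int out_ports,
                unsigned long frames_per_block)
{
        size_t stream_bytes = (size_t)(in_ports + out_ports)
                        * frames_per_block * sizeof(float);
        return state_bytes < stream_bytes;
}

With stereo in/out and 64 frames per block, that's 4 * 64 * 4 = 1 kB
per block - so anything with a delay line or a reverb tail loses right
away.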

//David

.- M u C o S -------------------------.  .- David Olofson --------.
|         A Free/Open Source          |  |      Audio Hacker      |
|   Plugin and Integration Standard   |  |     Linux Advocate     |
|                 for                 |  |  Open Source Advocate  |
|      Professional and Consumer      |  |         Singer         |
|             Multimedia              |  |       Songwriter       |
`-----> http://www.linuxdj.com/mucos -'  `---> david_AT_linuxdj.com -'

