Re: [linux-audio-dev] LADSPA run() blocksize


Subject: Re: [linux-audio-dev] LADSPA run() blocksize
From: David Olofson (david_AT_gardena.net)
Date: Sun Nov 19 2000 - 11:17:58 EET


On Fri, 17 Nov 2000, David Benson wrote:
> > >You can always just chop input[] into <= buffer[] sized chunks to do your
> > >processing. Might be a bit slower, but *shrug*.
> >
> > *shrug* !!?!! :)
> >
> > It's an optimization, yes, but then we need optimizations like a fish
> > needs water.
>
> The problem is that the optimizations are so small, that they seem
> like pure bloat. In particular, *most* plugins are going to
> have their time dominated by the iterations of the per-sample loop,
> so what you do every 1024 samples doesn't seem too important
> (if your plugin chose 1024...)

We're not dealing with anything near 1024 samples when doing RT
processing. And as to the magnitude of the optimization: needing an
extra buffer plus a mixer plugin is not exactly insignificant, even
at 1024 samples/buffer, and especially not on CPUs with very high
core clocks, which tend to be seriously slowed down by cache misses.
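
For concreteness, the chunking workaround being discussed would look
something like this inside a LADSPA run(). This is just a minimal
sketch: MAX_BLOCK, the Plugin struct and the elided processing step
are made up for illustration; only the run() signature comes from
the real LADSPA API.

#include <ladspa.h>

#define MAX_BLOCK 256  /* fixed internal buffer size, chosen blindly */

typedef struct {
    LADSPA_Data *input;   /* set by connect_port() */
    LADSPA_Data *output;
    LADSPA_Data scratch[MAX_BLOCK];
} Plugin;

static void run(LADSPA_Handle instance, unsigned long sample_count)
{
    Plugin *p = (Plugin *) instance;
    unsigned long done = 0;

    while (done < sample_count) {
        unsigned long n = sample_count - done;
        if (n > MAX_BLOCK)
            n = MAX_BLOCK;
        /* ...process p->input + done through p->scratch into
           p->output + done; this extra loop and extra buffer
           are paid for on every single run() call. */
        done += n;
    }
}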

> Actually, I sort of think you should just have your host override
> malloc() with something that satisfies you more... (I guess something
> that does preallocation, since you feel brk will break you. You might
> try the `hoard' I saw on freshmeat a while ago.)

This is complicated, messy, and unsafe, as you don't know anything
about the allocation sizes as functions of the buffer size. You'd
need totally generic dynamic allocation that is RT safe, and that's
a science where really good solutions are yet to be seen...

I certainly don't want to see anyone being forced to go there
because of a trivial API matter like this. It's for very complex
systems, where you *really* need dynamic allocation.

Besides, *any* RT memory manager is going to need a locked memory
pool. How big?
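
To make the question concrete, here's a minimal sketch of the locked
pool any RT memory manager has to sit on top of. POOL_SIZE is an
arbitrary guess, which is exactly the problem:

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define POOL_SIZE (4 * 1024 * 1024)  /* guessed up front, pinned forever */

static void *rt_pool;

static int rt_pool_init(void)
{
    rt_pool = malloc(POOL_SIZE);
    if (!rt_pool)
        return -1;
    memset(rt_pool, 0, POOL_SIZE);     /* touch every page */
    return mlock(rt_pool, POOL_SIZE);  /* pin it in RAM */
}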

> For this and the run_adding() argument, I'd love to see measurements
> (your intuition and mine differ on their importance)
>
> Summary: I'm worried very much about adding/recommending plugin complexity
> that could be done (with say >99% efficiency) by the host.

I'd say a proper API fix *removes* a great deal of complexity on
both the host and plugin side. I can't see where the problem is.

Or: if your plugin needs internal buffers, it HAS to allocate them
somehow, and it can't do that before it knows how big the buffers
need to be. No problem in non-RT hosts, but most people coming here
seem to be having problems with *RT* work - that's the major problem
that MacOS or Windows cannot solve.
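
To illustrate what such an API fix could look like (this is entirely
hypothetical - the extra max_block parameter does not exist in the
real LADSPA instantiate()): if the host promised a maximum block
size up front, all buffers could be sized before run() is ever
entered.

#include <stdlib.h>
#include <ladspa.h>

typedef struct {
    LADSPA_Data *scratch;
    unsigned long max_block;
} Plugin;

static LADSPA_Handle instantiate_ex(const LADSPA_Descriptor *desc,
                                    unsigned long sample_rate,
                                    unsigned long max_block)
{
    Plugin *p = calloc(1, sizeof *p);
    if (!p)
        return NULL;
    p->max_block = max_block;
    p->scratch = malloc(max_block * sizeof(LADSPA_Data));
    if (!p->scratch) {
        free(p);
        return NULL;
    }
    /* run() may now assume sample_count <= max_block: no checks
       and no allocation anywhere in the RT path. */
    return p;
}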

So, as the API stands, the plugin will have to deal with it somehow,
and that means adding an extra loop. Note that checks, extra run()
implementations etc. won't work, as you don't know anything before
you're *in* run()! That is, there's no way to optimize this away.
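
The kind of in-run() check that won't fly looks like this
(hypothetical sketch, error handling omitted): the plugin only
learns the block size once it is already inside run(), so it ends
up (re)allocating right there in the RT path, where realloc() can
block or fault in pages.

#include <stdlib.h>
#include <ladspa.h>

typedef struct {
    LADSPA_Data *scratch;
    unsigned long scratch_size;
} Plugin;

static void run(LADSPA_Handle instance, unsigned long sample_count)
{
    Plugin *p = (Plugin *) instance;

    if (sample_count > p->scratch_size) {
        /* Deadline killer: unbounded allocation mid-cycle. */
        p->scratch = realloc(p->scratch,
                             sample_count * sizeof(LADSPA_Data));
        p->scratch_size = sample_count;
    }
    /* ...process sample_count samples using p->scratch... */
}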

As to the host side solution, see above, and add that this locked
pool won't fit in the CPU cache... The net result will be that you'll
see CPU load peaks for the first few cycles (or longer, depending on
what the host is doing with the buffer size), until all plugins have
reinitialized themselves to deal with the biggest buffer size they've
seen so far.

No way I'd ever put anything like that in a real hard RT system; and
since it just adds complexity for no real reason, I wouldn't want it
anywhere at all, in any kind of system.

(As to MuCoS, I've actually made bigger trade-offs than avoiding an
extra API parameter in order to stay away from real dynamic memory
allocation.)

//David

.- M u C o S -------------------------. .- David Olofson --------.
| A Free/Open Source | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
| for | | Open Source Advocate |
| Professional and Consumer | | Singer |
| Multimedia | | Songwriter |
`-----> http://www.linuxdj.com/mucos -' `---> david_AT_linuxdj.com -'


