Subject: Re: [linux-audio-dev] LADSPA run() blocksize
From: Paul Barton-Davis (pbd_AT_Op.Net)
Date: Fri Nov 17 2000 - 15:58:46 EET
>> does anyone have a solution as to how a LADSPA plugin can avoid
>> allocating any internal buffers in run(), when it needs to know the
>> maximum value of the argument to run() and/or run_adding() ?
>
>I was thinking about this on my way to work, but I couldn't think of a
>situation when you needed to know.
>
>You can always just chop input[] into <= buffer[] sized chunks to do your
>processing. Might be a bit slower, but *shrug*.
*shrug* !!?!! :)
It's an optimization, yes, but then we need optimizations like a fish
needs water.
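For reference, the chunking workaround being shrugged at would look
something like this sketch (the `Plugin` struct, `CHUNK` size and the
placeholder gain are all hypothetical, not from ladspa.h):

```c
#include <string.h>

#define CHUNK 256  /* plugin's fixed internal buffer size (assumed) */

typedef struct {
    float *input;          /* connected input port */
    float *output;         /* connected output port */
    float scratch[CHUNK];  /* internal work buffer, allocated up front */
} Plugin;

/* hypothetical DSP kernel that operates on at most CHUNK frames */
static void process_chunk(Plugin *p, const float *in, float *out,
                          unsigned long n)
{
    memcpy(p->scratch, in, n * sizeof(float));
    for (unsigned long i = 0; i < n; i++)
        out[i] = p->scratch[i] * 0.5f;  /* placeholder gain stage */
}

/* run() chops SampleCount into <= CHUNK sized pieces -- the
   "double loop" you are forced into when the maximum block
   size is unknown */
static void run(Plugin *p, unsigned long SampleCount)
{
    unsigned long done = 0;
    while (done < SampleCount) {
        unsigned long n = SampleCount - done;
        if (n > CHUNK)
            n = CHUNK;
        process_chunk(p, p->input + done, p->output + done, n);
        done += n;
    }
}
```

The outer while loop plus the inner per-sample loop is the extra cost
under discussion: it runs on every run() call, for every plugin.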
The host knows what its maximum block size is - if the i/o
source/destination is an audio device, it will be the fragment size,
and analogous limits exist for hosts doing network i/o instead. The
host may call plugins with a frame count lower than this, but it will
never call them with a larger value.
So I think we should just allow the host to call set_block_size(), or
alternatively pass the value into activate(). Then plugins can make
sure their buffers are the right size, and they don't have to use a
double loop within run/run_adding. The semantics are simple:
        * it's never called in a context considered "hard-RT", so
        using brk(2)-based or other non-deterministic functions is fine.
* specifically, it must be called by the host *outside* the
scope of "activate(); run(); run() ....; deactivate()".
* the host will never call the plugin with a larger nframes
value without first calling deactivate() and then
set_block_size() again.
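Plugin-side, the proposal might look like the sketch below (everything
here is hypothetical - set_block_size() is not part of the real
ladspa.h, and the struct and names are invented for illustration):

```c
#include <stdlib.h>

typedef struct {
    float *scratch;           /* internal buffer, sized once up front */
    unsigned long max_frames; /* largest nframes the host will pass */
} Plugin;

/* Called outside the activate()..deactivate() scope, never in a
   hard-RT context, so non-deterministic allocation is fine here. */
static void set_block_size(Plugin *p, unsigned long max_frames)
{
    free(p->scratch);
    p->scratch = malloc(max_frames * sizeof(float));
    p->max_frames = max_frames;
}

static void run(Plugin *p, unsigned long SampleCount)
{
    /* guaranteed by the proposed semantics:
       SampleCount <= p->max_frames, so no allocation and no
       chunking double loop is needed here */
    for (unsigned long i = 0; i < SampleCount; i++)
        p->scratch[i] = 0.0f;  /* placeholder DSP */
}
```

The host-side call order implied by the rules would be:
set_block_size(); activate(); run() ...; deactivate(); and only after
deactivate() may set_block_size() be called again with a larger value.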
What say you ?
--p
This archive was generated by hypermail 2b28 : Fri Nov 17 2000 - 16:42:43 EET