RE: [linux-audio-dev] exploring LADSPA


Subject: RE: [linux-audio-dev] exploring LADSPA
From: Brad Arant (barant_AT_barant.com)
Date: Thu Aug 14 2003 - 01:40:06 EEST


Hello Gang,

Would like to respond to Pete Yadlowsky...

|I'm new to this mailing list, though not especially new to computer music. I
|was heavily involved in it some years ago, mainly on the NeXT platform, then
|fell away. Out of curiosity, I recently decided to look around and see what
|was available today for Linux, audio-wise.

I myself am dabbling in the realm of audio under a 2.6 pre-test kernel right
now, on a Linux system whose source I have been compiling myself since 1997.
I have had great success and would like to share my thoughts and experiences
with you, since I think that is the purpose of this discussion group. I have
also developed my own graphical, object-oriented X Windows widget set in C++
that is quite efficient and simple, so that I could get some things done. It
has slowly been evolving into something quite nice.

|One of the items I found was LADSPA. "A standardized interface for audio
|plugin units carried in shared libraries," thought I. "Interesting idea." I
|took a closer look at LADSPA and, like any happy programmer, decided that
|there are some things about it that I'd do differently. So, to flesh out and
|test my ideas, and just for fun, I proceeded to build a LADSPA-inspired
|plugin system of my own. I'm writing now to present these ideas in the event
|that someone may find a few of them useful, and to perhaps contribute to
|LADSPA's evolution:

I too like the use of shared object libraries, since it is not practical to
statically link every conceivable module into one large executable.
Modularity is very important for expansion, but it must be appropriately
planned to ensure that control over the process is maintained.
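
For anyone who has not poked at the mechanics, the host side of the shared
object approach looks roughly like the sketch below. This is only an
illustration of the idea, not code from my system or from the LADSPA SDK;
the plugin path is just a placeholder. Only ladspa_descriptor() and the
descriptor fields come from ladspa.h.

    #include <stdio.h>
    #include <dlfcn.h>
    #include <ladspa.h>

    int main()
    {
        /* Open the plugin library; the path here is only an example. */
        void *handle = dlopen("/usr/lib/ladspa/amp.so", RTLD_NOW);
        if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        /* ladspa_descriptor() is the one entry point every LADSPA library
           exports. */
        LADSPA_Descriptor_Function desc_fn =
            (LADSPA_Descriptor_Function) dlsym(handle, "ladspa_descriptor");
        if (!desc_fn) { fprintf(stderr, "not a LADSPA library\n"); return 1; }

        /* Walk the descriptors the library provides. */
        for (unsigned long i = 0; ; ++i) {
            const LADSPA_Descriptor *d = desc_fn(i);
            if (!d) break;
            printf("%lu: %s (%s), %lu ports\n", i, d->Name, d->Label,
                   d->PortCount);
        }

        dlclose(handle);
        return 0;
    }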

|- I've done away with the distinction between control signals and audio
|signals. I understand the performance gains to be had by computing one class
|of signals less often than another, but I feel this is a hold-over from days
|when computers were much slower than they are now. In my ideal system,
|signals are signals and any signal should be potentially applicable to any
|purpose. I don't want to be bothered with control vs. audio, either
|conceptually or in actual code.

Early on I adopted this approach, but I have changed my ways as I traveled
down the path, for a few reasons. I have made the distinction between a
control signal and an audio path for the sake of patching. A control signal
in my system is primarily a single-channel signal, but it has all the
characteristics of a single audio channel. The audio channel is actually a
stereo 2x channel. When patching the audio processes together, it has helped
to have this distinction so that I do not have to manage stereo audio
channels as separate "control" paths. I have a module that will take a
single "control" channel and allow you to pan it across into an "audio"
channel. Likewise, I have a module that sums the two channels of an audio
path into a single "control" channel. It has actually been less of a bother
this way than trying to deal with all those single channels in a standard
audio processing chain, and it is actually a necessity when dealing with
stereo reverb and other effects that use a stereo signal path (like rotary
speaker emulation).
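
To sketch what I mean, the pan and sum modules might look something like the
outline below. The names, the stereo frame type, and the plain buffer
interface are made up here for illustration; they are not my actual module
code.

    typedef double Sample;              /* see the note on precision below */

    struct StereoFrame { Sample left, right; };

    /* Pan a mono "control" channel across into a stereo "audio" channel.
       pan = 0.0 is hard left, 1.0 is hard right. */
    void pan_to_audio(const Sample *in, StereoFrame *out,
                      unsigned long n, Sample pan)
    {
        for (unsigned long i = 0; i < n; ++i) {
            out[i].left  = in[i] * (1.0 - pan);
            out[i].right = in[i] * pan;
        }
    }

    /* Sum the two channels of an "audio" path back into one "control"
       channel. */
    void sum_to_control(const StereoFrame *in, Sample *out, unsigned long n)
    {
        for (unsigned long i = 0; i < n; ++i)
            out[i] = 0.5 * (in[i].left + in[i].right);
    }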

I would also like to say that I have looked at the JACK and LADSPA signal
paths, and I believe they are single-precision floating point numbers. I
personally have adopted the double-precision format. I have done extensive
benchmark tests and found no degradation in performance, but I can hear a
slight difference on some of the sounds I have created (extremely minor
differences). I have also found that most of the C library math routines I
use are double precision and, not being the most adept programmer, I could
not see doing it any other way, so I did some testing and found double to be
the best way to go, with no sacrifices.
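
Since LADSPA_Data is a 32-bit float, keeping a double-precision path
internally really just means converting at the boundary, something like the
sketch below (again only an illustration; the function names are invented).

    #include <ladspa.h>   /* LADSPA_Data is a 32-bit float */

    /* Convert an incoming LADSPA/JACK float buffer to the internal
       double-precision path... */
    void to_internal(const LADSPA_Data *in, double *out, unsigned long n)
    {
        for (unsigned long i = 0; i < n; ++i)
            out[i] = (double) in[i];
    }

    /* ...and back again on the way out. */
    void from_internal(const double *in, LADSPA_Data *out, unsigned long n)
    {
        for (unsigned long i = 0; i < n; ++i)
            out[i] = (LADSPA_Data) in[i];
    }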

|- Somewhat related to the item above, a plugin's run() method computes
|exactly one sample at each call, not a block of samples. This is again a
|matter of conceptual simplification. I don't want the individual plugin to
|have to know anything of process iteration; that job is for the containing
|infrastructure. Also, some years ago I started working on some computer
|synthesis software and found that when units ("plugins") computed samples
|in blocks (instead of one at a time), there was a strange behavior when
|these units were patched together in looped delay line configurations. As I
|recall, gaps would appear in the audio output, and these gaps would grow in
|length as the loop proceeded. I don't remember if I ever discovered the
|exact cause, but I think it had something to do with the relationship
|between the length of a block of samples and the length of the delay line.
|Maybe I was doing something wrong, but going to a one-sample-per-run
|process made the problem go away. I wanted the flexibility of being able to
|patch units together in any sort of topology.

I would like to comment on this as well. The audio and signal path objects
that I have described actually carry the number of samples inside a packet
object, and these packets are passed between the objects. The reason for
this is so that I can tune the number of samples in each packet to the job
at hand. When an audio or control packet is "used up", the run() function
will fetch another; the size value is based on a system-wide default but can
also be adjusted at the module level. My reasoning for doing this is to
provide a mechanism to balance the speed of a patch against its latency
characteristics. I have found that a single sample has the lowest latency,
especially in feedback loops, but the overhead of invoking run() through the
signal paths effectively reduces the throughput. By enabling a small buffer
on each packet, a single run() request can queue up multiple samples, which
quickly improves the performance of the patch but introduces some latency
into the system. My reason for making this a variable is primarily so that I
could experiment with acceptable and unnoticeable latency versus processor
overhead.

A note on looped delay lines: I have not experienced any strange side
effects from buffering multiple samples in a looped delay with the
implementation I describe.
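
To make the packet idea concrete, here is a rough sketch of what such a
tunable packet and module might look like. The names and the shape of run()
are my own guesses for illustration, not my actual code.

    #include <vector>

    typedef double Sample;

    /* A packet carries a tunable number of samples between modules. The
       size trades throughput (large packets, fewer run() calls) against
       latency (small packets, down to one sample in feedback loops). */
    struct Packet {
        std::vector<Sample> data;
        unsigned long pos;                  /* how much has been consumed */

        explicit Packet(unsigned long size) : data(size), pos(0) {}
        bool used_up() const { return pos >= data.size(); }
    };

    struct Module {
        unsigned long packet_size;          /* system default, overridable
                                               per module */

        explicit Module(unsigned long size = 64) : packet_size(size) {}

        /* Called by a downstream consumer whenever its current packet is
           used up. */
        Packet run() {
            Packet p(packet_size);
            for (unsigned long i = 0; i < p.data.size(); ++i)
                p.data[i] = 0.0;            /* a real module would synthesize
                                               or process here */
            return p;
        }
    };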

|- Every input port is a mixer/modulator. Since the operations of mixing and
|modulating (multiplying) signals together are so often needed, I decided to
|build them into plugin input ports. A given input port can accept any
|number of connections from "feeders" and either mixes or modulates their
|outputs transparently, according to the port's configuration. I believe
|this simplifies use of the system and eliminates the need for a special
|runAdding() plugin method.

This is an interesting concept to me, and in many cases it is an issue of
interface design, as the implementation still results in signals being
added. On my interface, if I connect two outputs into a single input, the
interface places a little "plus" sign in front of the input port. Clicking
on the plus sign (it has a little circle around it as well) opens a module
with a volume slider for each channel connected, and there is a button to
change the "+" into an "X" for modulation. Internally, though, I create a
separate module that is "auto-patched" into the patch, for two reasons:
1) I don't incur the overhead of a summation or modulation process if there
aren't two signals to contend with, and 2) I do not burden the module code
writer with the additional task of dealing with the summation or modulation
of multiple inputs. It has greatly simplified the code process, and it is an
interface issue as far as I am concerned. Also, in my system, attenuation or
volume control is achieved by this mixing process, so you can invoke a
single-channel mixer for the purpose of volume control, where the volume is
actually accomplished via a modulation ("X") process from an external
control value. A "1.0" is normal volume, ".5" is half volume, etc.

Thanks for sharing your ideas. I hope I have provided some insight, and
thanks for giving me the opportunity to listen to your ideas and respond.


