Re: [linux-audio-dev] extending LADSPA


Subject: Re: [linux-audio-dev] extending LADSPA
From: Tom Pincince (stillone_AT_snowcrest.net)
Date: Wed Oct 25 2000 - 07:02:52 EEST


>the case at hand as i understand it. i'd be sympathetic to the notion
>that *all* audio editors should really be using EDL's and doing
>temporal sequencing with them, but that just isn't how the vast
>majority of Linux sound file editors operate at this time.

You beat me to the punch line! I'm impressed.

This notion may be too large a leap for many, and I don't recommend
blind faith, so let's look a little deeper.

I can't do fades on my host with LADSPA. Solutions:
1) Add functionality to LADSPA (LADPA?)
2) Add functionality to the host
So far this thread has treated this as a deficiency in LADSPA. I see
it entirely as a host issue. Take a possibly useful analogy: light
bulbs (ok, so they screw in instead of plug in). I just bought a new
light bulb and put it into my lamp, but no light is coming out. What
do I do? Check the lamp. Light bulbs are completely useless without a
lamp, but I don't want to see manufacturers treat this as a problem
and eliminate it by hardwiring bulbs into lamps. I am happy to have
fully self-contained light bulbs that are both infinitely portable
and completely useless, relying on the host lamp for all functions.
I would not be happy if I had to buy a lamp with every bulb. So what
does this have to do with plugins? Plugins are completely useless
outside the context of the host. I would be happy to see plugins that
do nothing but process one block of audio data based on one set of
control data, and that accept new control data at a maximum rate
equal to the block rate, regardless of the size of the block. The
plugin is completely dependent on the host to set the block size,
provide the control data, and provide the audio blocks.
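
To make that model concrete, here is a minimal sketch in C of the
kind of plugin I mean. The interface is simplified and hypothetical,
not the actual ladspa.h declarations; the point is only that the
plugin sees one audio block and one frozen set of control values per
run call, and everything else is the host's job.

  /* Hypothetical simplified plugin: one gain control, one audio input,
   * one audio output. Control values are constant for the duration of
   * a block; the host decides the block size and when they change. */
  typedef struct {
      const float *audio_in;   /* connected by the host */
      float       *audio_out;  /* connected by the host */
      float        gain;       /* control value, fixed within a block */
  } SimpleAmp;

  /* Process exactly one block; no notion of time, files, or selections. */
  void simple_amp_run(SimpleAmp *amp, unsigned long sample_count)
  {
      unsigned long i;
      for (i = 0; i < sample_count; i++)
          amp->audio_out[i] = amp->audio_in[i] * amp->gain;
  }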

>>for instance, when i work with an editor (like peak on the mac) and i
>>want to do a fade, i simply select some audio and run the fade in filter
>>on it. i'm not thinking about how long the selection is or how many
>>samples there are or anything like that. and depending on how the gui is
>>set up, i may or may not have any way to get at that information anyway.
>>what i do know is that i've selected (visually) the exact region that i
>>want faded. i guess that an intelligent host program could fill in a
>>default value for me, one that is equal to the length my current
>>selection. but there's no way to enforce that behaviour.
>
>The fade functions in Peak, SoundMaker, SoundEdit 16, and SDII are not
>intended to process streaming audio in real-time but get the number of
>samples to process from the selection. Without explicitly typing the data
>into the plugin parameters window you are supplying the plugin with the
>extra data by selecting it, the plugin is getting that data from the host

Right. Any host that allows for the graphic selection of regions is
providing the exact begin and end points. Looking at fades in
particular, there are three main ways to accomplish them. In the
first two methods the user defines the fade region in the host and
chooses or draws a fade shape. The region is almost always chosen
graphically by dragging the cursor from the fade-in end point or the
fade-out start point to the end point of the region. Method 1 is
destructive and rewrites the fade portion to the original file.
Method 2 creates a new file containing only the fade and connects it
to the original file with an EDL. Method 3 uses the host as a
sequencer and sends gain automation control data to an amplifier
plugin. Method 1 was used in editors written back in the days of
25 MHz machines with 8 MB of RAM, when all processing was offline.
Some programs still use this approach, but it is getting rare.
Methods 1 and 2 have nothing to do with real time. Methods 2 and 3
are current, and sometimes an editor offers both. Only method 3 is
real time. Since method 2 only works with EDL-based hosts, and I
pointed out that EDL implementation is a form of sequencing, the
thing to do is to give all hosts sequencing capabilities. Allowing
the host to define the block size and feed control data, sequencer
style, to a simple plugin solves all these problems and is probably
much easier than creating smart plugins. Even if you wanted to create
separate fade files for an EDL, the host could provide all the
necessary info for a simple plugin to accomplish this. Personally I
have been blown away by the power of non-destructive EDL editing.
Can anyone identify a situation that can't be addressed this way?
(I don't mean DC offset, hiss and noise removal, normalization, or
other functions that require large amounts of analysis before
processing.)
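
To show how little the plugin needs to know, here is a sketch of a
host doing a linear fade-out with nothing more than the simple
block-rate plugin sketched earlier. The SimpleAmp type and
simple_amp_run function are the hypothetical ones defined above; the
only point is that the host turns its knowledge of the selection into
a stream of per-block control values.

  /* Sketch: host performs a linear fade-out over a selected region by
   * updating the gain control once per block, sequencer style. The
   * plugin never needs to know the selection length. */
  void host_fade_out(SimpleAmp *amp, const float *in, float *out,
                     unsigned long region_len, unsigned long block_size)
  {
      unsigned long done = 0;
      while (done < region_len) {
          unsigned long n = region_len - done;
          if (n > block_size)
              n = block_size;
          /* The host derives the control data from the selection it owns. */
          amp->gain      = 1.0f - (float)done / (float)region_len;
          amp->audio_in  = in + done;
          amp->audio_out = out + done;
          simple_amp_run(amp, n);
          done += n;
      }
  }

The same loop could just as easily write its output into a small new
file for an EDL-based host, or back into the original file for a
destructive one; the plugin code does not change.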

I am not familiar with Sweep. Is it purely destructive? If it isn't,
does it create new files equal in length to the entire old file, or
can it work on regions? If it can piece together regions for
playback, it can be modified to do EDLs and sequencing.
Non-destructive editing is the modern way, and is used in programs
like Samplitude and Pro Tools. Both of these programs recently added
MIDI sequencing, which illustrates that EDL editors really are
sequencers. Real-time plugins are also the modern way. The only
popular batch functions I know of are for soundfile format
conversion.
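
For what it's worth, the machinery an EDL needs is small. Here is a
hypothetical sketch of the data an EDL-capable host has to keep; the
names are made up, but the idea is just an ordered list of regions of
source files placed on a timeline, with the original audio never
rewritten.

  /* Hypothetical minimal EDL entry: playback is assembled from regions
   * of source files (the original takes, or small rendered files such
   * as a fade created by method 2). */
  typedef struct {
      const char   *source_file;    /* file the audio comes from */
      unsigned long source_offset;  /* first frame to read from the source */
      unsigned long length;         /* number of frames in this region */
      unsigned long timeline_pos;   /* where the region starts on the timeline */
  } EdlEntry;

  /* A playlist is simply an ordered array of such entries. */
  typedef struct {
      EdlEntry     *entries;
      unsigned long count;
  } Edl;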

Tom


