Subject: [linux-audio-dev] VST2.0 , LADSPA1 and MuCos analysis
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Fri Mar 31 2000 - 00:23:56 EEST
Hi,
Today I "digested" the VST2.0 PDF (BURP :-) ).
84 pages, read _slowly_ (yes, several hours) so as
not to miss anything.
Here are some comparisons with LADSPA, MuCos etc.:
VST looks nice in my view; writing a plugin is quite easy
(look at AGain and ADelay in the SDK).
Basically the difference between VST1.0 and 2.0 is that
2.0 comes with an event system (mainly MIDI, but generic
events are allowed too), support for timing functions (internal
timing, SMPTE, MIDI clock) which allow sample-accurate
MIDI/audio synchronization, support for surround (mainly they
added a speaker-setup structure) and support for offline processing.
PORTS vs buffer pointer arrays:
They do not use the PORT concept:
basically each plugin has a list of input and output buffer
pointers: float **inputs, float **outputs
which means that in theory you are not limited to stereo.
canMono() allows feeding mono signals into plugins
which output more than one channel; this can in some cases
speed things up, since the plugin can optimize its internal data flow.
comments: I like the port approach better since it is more flexible
than mere buffer pointers.
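The buffer-pointer convention can be sketched in a few lines (a minimal illustration; processGain and its parameter names are mine, not the SDK's):

```cpp
#include <cassert>

// Sketch of the VST-style calling convention: the host hands the plugin
// arrays of channel pointers, so the channel count is not fixed at two.
// processGain and its parameters are illustrative names, not SDK API.
void processGain(float **inputs, float **outputs,
                 long numChannels, long sampleFrames, float gain)
{
    for (long ch = 0; ch < numChannels; ++ch)
        for (long i = 0; i < sampleFrames; ++i)
            outputs[ch][i] = inputs[ch][i] * gain;  // "replacing" semantics
}
```

The same code handles mono, stereo or any other channel count; the host just passes more pointers.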
run() and runAdd():
VST _requires_ process() ( = runAdd() ) to be present;
processReplacing() ( = run() ) is optional.
In LADSPA it's the opposite.
The "missing" method must be "simulated" by both VST and LADSPA:
to simulate run(), a VST host basically has to zero out the output buffer
before calling a runAdd()-only plugin, while a LADSPA host has to set up a
temporary buffer to do the adding.
I am not an expert on how complex plugin networks should look
(number of plugs needing run() vs plugs needing runAdd()),
but I think VST wins here from a performance POV,
since zeroing a buffer is faster than using an additional buffer
to perform the adding, due to caching.
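Both simulations as a host-side sketch (all function names are mine; the -6 dB gain is arbitrary, just to have a concrete plugin body):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// An accumulate-only plugin in the VST style: output is ADDED to.
void runAddOnly(const float *in, float *out, long n)
{
    for (long i = 0; i < n; ++i)
        out[i] += in[i] * 0.5f;            // arbitrary fixed gain
}

// VST's problem: getting "replace" semantics from an add-only plugin.
// Zero the destination first -- a cheap, cache-friendly memset.
void simulateRun(const float *in, float *out, long n)
{
    std::memset(out, 0, n * sizeof(float));
    runAddOnly(in, out, n);
}

// LADSPA's mirror problem: getting "add" semantics from a replace-only
// plugin. Render into a scratch buffer, then add it in: an extra buffer
// and extra cache traffic.
void runReplaceOnly(const float *in, float *out, long n)
{
    for (long i = 0; i < n; ++i)
        out[i] = in[i] * 0.5f;
}

void simulateRunAdd(const float *in, float *out, long n)
{
    std::vector<float> scratch(n);
    runReplaceOnly(in, scratch.data(), n);
    for (long i = 0; i < n; ++i)
        out[i] += scratch[i];
}
```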
- PARAMETERS: (in LADSPA these would be control ports)
Basically each plugin can provide an array of parameters,
each of which can act as both an input and an output value (floats).
The plugin can provide the number of parameters it wants,
the host can look up the parameter names and units, and
there is even a
getParameterDisplay(long index, char *text)
through which the host can let the plugin convert a parameter
into human-readable text.
That is a very good approach and IMHO much better than
LADSPA's PortRangeHint properties like LOGARITHMIC etc.
For example the AGain plugin does:
void AGain::getParameterDisplay(long index, char *text)
{
    db2string(fGain, text);
}
The internal parameter fGain is converted into a dB (logarithmic) string
by the db2string() function.
But if fGain were an integer value, you would only need to change this
one function, getting instant benefits in ALL hosts.
1:0 for VST :-)
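For illustration, a rough stand-in for what a db2string()-style helper does (the exact formatting is my assumption, not the SDK's):

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>

// Rough stand-in for the SDK's db2string() helper: convert a linear gain
// factor into a human-readable dB string. Formatting is my guess.
void db2string(float gain, char *text)
{
    std::snprintf(text, 24, "%.2f dB", 20.0f * std::log10(gain));
}
```

The point is that the conversion lives in the plugin, so the host can display any parameter without knowing its scale.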
VST always uses a range of 0.0 - 1.0 when representing parameters.
This makes it easier for the host to deal with the data, since
for example sliders or faders simply have to supply a 0.0 - 1.0 range.
In the case of LADSPA, the host has to check the RangeHints in
order to supply correct values to the plugin.
I think the 0.0 - 1.0 idea isn't bad, but having no range restrictions
isn't a big problem either, provided a RangeHint structure is present.
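The extra host-side step LADSPA requires is tiny; a sketch (function name is mine):

```cpp
#include <cassert>

// Sketch of the extra mapping step a LADSPA host performs: a fader's
// normalized 0.0 - 1.0 position is scaled into the [lower, upper] bounds
// taken from the port's RangeHint. A VST host skips this entirely,
// since parameters are already normalized. Function name is mine.
float denormalize(float normalized, float lower, float upper)
{
    return lower + normalized * (upper - lower);
}
```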
Audio data always has to be in the -1.0 to 1.0 range in VST.
I don't like this restriction much: if every plugin enforces the
rule, you could lose some S/N ratio. Assume
we have a Gain module with clipping support;
cascading 3 gain modules in sequence, each with 60dB of gain
(60 * 3 = 180), the last module would clip the audio, since
float only provides about 144dB of S/N ratio, inducing nice
distortions.
But I think most plugins do not clip, so this shouldn't
be a big problem.
Anyway, the "elegant" way is to NOT limit the exponent;
that is the whole purpose of floating point.
For parameters it doesn't matter, since the dynamic range of
a parameter is fixed. But in the case of audio it's variable.
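A quick numeric check of the headroom argument (clip() and the cascade functions are my illustration, not any API):

```cpp
#include <algorithm>
#include <cassert>

// Illustration of the headroom argument: three cascaded 60 dB stages
// (a factor of 1000 each) on a full-scale sample. A plain float carries
// the 10^9 result in its exponent without complaint; a stage that clamps
// to [-1, 1] pins the signal instead.
float clip(float x) { return std::max(-1.0f, std::min(1.0f, x)); }

float cascadeFree(float x)     // no range enforcement
{
    for (int i = 0; i < 3; ++i) x *= 1000.0f;
    return x;                  // 10^9 * x, still an ordinary float
}

float cascadeClipped(float x)  // each stage enforces the [-1, 1] rule
{
    for (int i = 0; i < 3; ++i) x = clip(x * 1000.0f);
    return x;                  // pinned to 1.0 after the first stage
}
```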
PROGRAMS: basically VST allows a bank of parameter setups (presets) stored
in the plugin, which the host can access using the setProgram(index) function.
This makes sense for all sorts of plugins: for example a Hall plugin could have
several presets (small Room, Concert Hall etc.)
I like this approach, since small plugins do not have to care about this and
can leave the function empty without adding complexity.
LADSPA: no (implicit) support for programs
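The program idea in a few lines (the class and its names are mine, not the SDK's):

```cpp
#include <cassert>
#include <vector>

// Sketch of the program/preset idea: the plugin keeps a bank of parameter
// snapshots and the host switches between them with setProgram(index),
// without knowing what the parameters mean. Class and names are mine.
struct PresetBank {
    std::vector<std::vector<float>> programs;  // one snapshot per preset
    int current = 0;

    void setProgram(int index) { current = index; }
    float getParameter(int param) const { return programs[current][param]; }
};
```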
AUTOMATION:
VST provides setParameterAutomated() calls to allow the host to record/playback
parameter changes in an easy way.
(VST plugin provides callbacks to notify the host/GUI for parameter changes)
I like this too, and if the plugin doesn't care it isn't forced to provide
these functions.
Automation is definitely needed in a professional host-plugin environment,
and Steinberg solved this in a simple way (it may not be the most flexible in
the world, but it works well in most cases).
LADSPA: no support for automation
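The automation flow as a sketch (all names are mine; the real SDK call is the plugin-side setParameterAutomated(), which ends up in a host callback like this):

```cpp
#include <cassert>
#include <vector>

// Sketch of automation recording: when the user moves a control on the
// plugin GUI, the plugin notifies the host, which records
// (frame, parameter index, value) triples it can later play back.
// All names here are mine, not the SDK's.
struct AutomationEvent { long frame; long param; float value; };

struct Host {
    std::vector<AutomationEvent> recorded;
    long currentFrame = 0;

    // where a setParameterAutomated()-style notification would land
    void parameterAutomated(long param, float value) {
        recorded.push_back({currentFrame, param, value});
    }
};
```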
EVENTS:
VST2.0 added the possibility for plugins to receive and send events
from/to the host (sending is not implemented in Cubase yet, I think).
There is no way to send events from one plugin to another with this system.
The host-to/from-plugin event exchange model is rather simple:
basically arrays of event structures.
Although it was meant to send MIDI data (from hosts to soft-synth plugins),
it's a generic system and you can send events of any size.
VST now comes with timing functions through which the host can supply
sample-accurate timing information to plugins, so that soft-synths can
render MIDI data to audio with sample accuracy.
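The "arrays of event structures" model, sketched; the fields approximate the VST2 idea (a sample-accurate deltaFrames offset into the current audio block) but are NOT the exact SDK layout:

```cpp
#include <cassert>

// Approximation of a VST2-style event: deltaFrames says how many samples
// into the current audio block the event occurs, which is what makes
// sample-accurate soft-synth timing possible. Not the exact SDK layout.
struct MidiEvent {
    long deltaFrames;       // sample offset within the current block
    unsigned char data[3];  // raw MIDI bytes, e.g. status + note + velocity
};

// A soft-synth would scan the array once per block and start each note
// exactly deltaFrames samples in; here we just count note-ons.
int countNoteOns(const MidiEvent *events, int numEvents)
{
    int n = 0;
    for (int i = 0; i < numEvents; ++i)
        if ((events[i].data[0] & 0xF0) == 0x90) ++n;  // 0x9n = note-on status
    return n;
}
```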
GUI ISSUES: VST2 provides a sort of cross-platform GUI library
which they ported to WIN32, MacOS and Motif.
I think it's definitely worth investigating porting this lib to
Gtk / Qt.
final comments:
From a technical POV VST is nice and simple, although it has some limitations
which the die-hard developer will dislike a bit.
Notice that VST is very tied to C++, which means writing a C-only plugin
becomes quite difficult (maybe even impossible).
With LADSPA it's easy to write C-only plugins.
Paul Barton-Davis is suggesting we go the VST way:
basically the API (header files) is already here, since there is an SDK for
the SGI. What we still need is a host that can run it
(a SIMPLE host isn't that difficult to implement;
Paul already hacked together one in a few hours :-) ),
and a port of the VSTGUI lib, to ease cross-platform plugin
development.
What I dislike here is mainly:
1) the license: Steinberg doesn't permit you to modify the API (similar to
the old Qt license problems)
2) some technical deficiencies (data types etc.)
Let's see how the open-source community deals with 1).
Anyway I welcome VST2 on Linux, since it will only help
Linux audio get better. (tons of plugins, especially the
commercial pro-audio ones)
For those still wanting a free GPLed API, MuCos is the way to go;
if we work together (as with LADSPA1), we can get good results,
having an API which can _peacefully_ coexist with VST2.
(just as GNOME and KDE can coexist)
I think after LADSPA1 finalizes, David and I (and hopefully others who will
help us) will continue to work on MuCos (or should we call it LADSPA2, Richard?)
goal of MUCOS : (my proposal, please comment)
AUDIO PART OF MUCOS:
- based on LADSPA1, hosts will be backward compatible with LADSPA1-plugins
- multi-datatype, multichannel (basically including some of my ideas)
- MUCOS hosts can run VST2 plugins (achieved by adding some VST-like features,
like parameter functions, automation etc)
EVENT PART OF MUCOS: ( David are you listening ??)
- flexible event system which should not be too complex
- VST2.0 event compatibility to run 100% of VST2 plugins ( audio + synths)
GUI PART:
my proposal would be to give the plugin developer multiple choices:
- XML description a la Quasimodo
- VSTGUI
Still any supporters out there? :-)
Benno.
This archive was generated by hypermail 2b28 : Fri Mar 31 2000 - 02:00:48 EEST