Re: [linux-audio-dev] plugin format


Subject: Re: [linux-audio-dev] plugin format
From: David Olofson (audiality_AT_swipnet.se)
Date: Tue Aug 24 1999 - 22:55:44 EDT


reactor/CTPmedia wrote:
>
> yo!
>
> i think it's high time we declared a standard for plugins.

*heh..* I just had the idea that I should open up my design work on the
Audiality Plug-in API... :-)

As I'm still working on a layer for porting Linux drivers to RTLinux
(latest results: 0.4 ms latency with an AudioPCI card, rock solid), I
haven't yet gotten around to hacking a serious spec. But I have quite a few
ideas from the last ten years or so I've been playing with audio and
code, and I have a rather clear picture of the basic implementation
details in my mind.

> should we copy GIMP's plugin system?

How would that map to hosting tens of plug-ins in a low latency real
time system? Haven't seen the spec (should take a look at that one too,
I guess), but I'd be surprised if it differs much from most plug-in APIs
of that kind. Not very well suited for audio, I'd guess...

> or use the system seen
> in x11amp/xmms?

No detailed knowledge of it, but that sounds more like the right kind of thing.

> is there any possibility to use/make VST plugins
> on linux?

There IS a VST 2.0 SDK for SGI... However, I still consider VST too
limited, and now (2.0) it already carries legacy baggage. And it's 32
bit FP audio streaming only; exactly what I want, but perhaps not what
you had in mind?

> waiting for VST2?

Already here... Anyway, I certainly don't like their way of allowing
various system calls and things like that from within the plug-in code.
Stricter GUI/processing separation is needed for a really clean API, and
it's absolutely crucial in a hard real time system, as ANY system call
would break the real time performance completely! And under RTLinux,
most of those calls don't even exist...

> any new, groundbreaking ideas?

Naah... Not groundbreaking perhaps, but the Audiality plug-in API will,
among other things...

1. Allow plug-ins to inform the engine about inherent/unwanted latency
2. Allow plug-ins to tell the engine about their desired latency
3. Handle sample accurate time stamped events
4. Support control signals in the form of low frequency audio streams

1 means that the engine can compensate for inherent latency in certain
algorithms, so that the user doesn't have to keep that in mind. (Say,
you split a signal into "low" and "high" using a dividing filter
network, process the two new signals with different plug-ins, and then
merge them together into a full signal. Meep! Phase error, unless the
engine keeps track of the latencies caused by all plug-ins.)
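
To make that concrete, here is a rough sketch of what the engine could
do with the reported latencies at a merge point. All names are made up
for illustration; nothing here is from the actual Audiality headers:

/* Pad the faster branch so both arrive phase aligned at the merge. */
unsigned compensate(unsigned lat_a, unsigned lat_b,
                    unsigned *pad_a, unsigned *pad_b)
{
        unsigned max = lat_a > lat_b ? lat_a : lat_b;
        *pad_a = max - lat_a;   /* extra delay frames on branch A */
        *pad_b = max - lat_b;   /* extra delay frames on branch B */
        return max;             /* inherent latency of merged signal */
}

If the "low" plug-in chain reports 64 frames and the "high" one reports
12, the engine inserts 52 frames of delay on the "high" branch, and the
merged signal comes out phase correct with a total latency of 64 frames.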

2 lets the engine keep track of 1 without simply adding to the total
system latency, and is also required for supporting feedback loops in a
controlled fashion.

3 means that you can send events at exact times (for example changing a
preset on a plug-in), rather than processing all "events" between the
processing buffers. Plug-ins can choose to support this in the process
loop for maximum performance under heavy event traffic, or just tell the
engine to split the buffers with the precision it sees fit, when there's
an event to handle.
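
Roughly, the "support it in the process loop" variant could look like
the sketch below. The names are invented, and I'm assuming events
arrive sorted by time stamp, each carrying a frame offset into the
current buffer:

#include <stddef.h>

typedef struct
{
        unsigned frame;  /* time stamp: offset into current buffer */
        int      type;   /* what to do; say, 0 = set gain */
        float    value;
} A_event;

/* Trivial gain plug-in applying events at exact sample positions. */
static void process(float *buf, size_t frames,
                    const A_event *ev, size_t nev, float *gain)
{
        size_t pos = 0, i;
        for (i = 0; i < nev; ++i)
        {
                size_t stop = ev[i].frame < frames ?
                                ev[i].frame : frames;
                while (pos < stop)      /* render up to the event... */
                        buf[pos++] *= *gain;
                if (ev[i].type == 0)    /* ...then apply it right here */
                        *gain = ev[i].value;
        }
        while (pos < frames)            /* rest of the buffer */
                buf[pos++] *= *gain;
}

The buffer splitting variant just means the engine calls process() once
per sub-buffer between events, so the plug-in never sees the events at
all.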

4 is first of all a way of reusing audio processing plug-ins for control
signal processing (useful in modular synthesizers); second, a way to get
nicer and faster plug-in code than pumping in masses of events would
result in.
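
For example, a control port could simply be a float buffer running at
some fraction of the audio rate. Invented names again, and the division
factor is just an assumption:

/* Control stream at 1/16 of the audio rate; the engine provides one
 * held control value per 16 audio frames. */
#define CTRL_DIV 16

static void apply_ctrl_gain(float *audio, const float *ctrl,
                            unsigned frames)
{
        unsigned i;
        for (i = 0; i < frames; ++i)
                audio[i] *= ctrl[i / CTRL_DIV];
}

The inner loop stays plain array math, and the very same gain code can
chew on either an audio stream or a control stream.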

Well, that's about what comes to mind right now... More ideas and design
stuff in various kinds of documents all over the place here. :-)

> with a nice standard plugin format, every linux audioware could
> multiply its power...

Yep. And I'm looking further. Audiality is not meant to be an API for
plug-ins and host applications - Audiality doesn't *have* host
applications in the normal case. Audiality will be a complete, general
purpose (not only audio), real time signal processing engine with
support for plug-ins and client applications. The engine is meant to be
a shared system resource, so hardware resources, plug-ins and streaming
databases can be used by multiple applications at once, if desired. That
is, if you run a soft synth, a multitracker and a MIDI sequencer,
they'll all be able to share the hardware, and they'll automatically be
synchronized with sub-sample precision.

The main Audiality engine implementation will run under RTLinux for
extreme performance and reliability, and will use drivers recompiled
into RTLinux drivers using the Driver Programming Interface I'm
currently hacking. This means that complete, existing Linux drivers can
be ported very easily, without even needing to adapt them to a new API -
I've implemented the most frequently used kernel calls as RTLinux safe,
context sensitive versions. (Bonus effect of the current version: The
drivers still work as standard Linux drivers, _at the same time_ as
being used by RTLinux threads.)
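
The "context sensitive" part boils down to something like this sketch.
in_rt_context() and rt_log() are hypothetical stand-ins, not actual
RTLinux or DPI calls:

extern int in_rt_context(void);    /* assumed: running in RT thread? */
extern void rt_log(const char *);  /* assumed: lock free, RT safe log */
extern int printk(const char *, ...);   /* normal kernel logging */

/* One entry point for the driver, different behaviour per context. */
void drv_log(const char *msg)
{
        if (in_rt_context())
                rt_log(msg);          /* must never block or take locks */
        else
                printk("%s\n", msg);  /* fine in normal Linux context */
}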

For safer development, and for users that can live with some 5 ms of
latency (new kernel patches by Ingo Molnar, in case you haven't heard...
:-), there will also be a user space SCHED_FIFO version of the engine
that doesn't need RTLinux.
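
That part is plain POSIX, by the way; something like the snippet below,
which needs root privileges:

#include <sched.h>
#include <stdio.h>

int go_realtime(void)
{
        struct sched_param sp;
        sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
        {
                perror("sched_setscheduler");
                return -1;
        }
        return 0;
}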

Both engine implementations will load and run the same plug-in binaries,
and in theory (if someone feels like porting an ELF loader), they could
be binary portable across all platforms using the CPU the code was
compiled for. (However, I don't care much for Windoze and MacOS anymore,
and nothing, not even BeOS, gets close to the RTLinux performance,
so...)

Some very, very preliminary API specs I started hacking the other day
are attached if you want to have a look... They don't explain much
perhaps, but at least give a hint about the most important thing: the minimal impact
of flexibility on performance. That is, there will be no ultra flexible
handle-anything-you-could-imagine function call interfaces to be used
from the processing code, as that would kill performance completely,
especially when dealing with very small buffers. It's not needed, and
would only generate bugs... If VST, TDM or DirectX does the job in your
studio (or
whatever), you should certainly not miss anything here. Oh, and no one
will charge $1000, ask you to sign an NDA, or even require that you
don't improve or change the API in your engine, if you decide you want
the SDK. ;-)
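
For the curious, the core of it is no more exotic than this kind of
thing. Hypothetical names only; the attached plugin.h is the real
draft:

typedef struct A_plugin A_plugin;
struct A_plugin
{
        /* Raw buffer pointers + frame count; the inner loop is plain
         * array math, with no per sample calls into the engine. */
        void (*process)(A_plugin *p, float **in, float **out,
                        unsigned frames);
        void *state;    /* plug-in private data */
};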

So, what is it you want, more specifically? Would VST do, or do you want
video streaming, network transparency and other stuff that has nothing
to do with low latency real time? Can't have it all under one API for
performance and complexity reasons, but I'm open to any suggestions;
sane or insane! Still time for some more brainstorming before I start to
hack engine code.

Let's get this right from the start and wipe the floor with DirectX and
VST 2.0, so we get some serious plug-in developers over to Linux soon.
:-)

//David


plugin.h


