Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage


Subject: Re: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: David Olofson (david_AT_gardena.net)
Date: Fri Nov 17 2000 - 10:05:12 EET


On Thu, 16 Nov 2000, Benno Senoner wrote:
[...]
> Basically we need to route both audio data and events (which can be
> MIDI data or some other control data like GUI events (set fader X of
> application Y to level Z, or query the status of fader A in
> application B, etc. etc.))
>
> We need an API which is able to accommodate complex apps like Ardour
> or EVO, so LADSPA is entirely out of the question. (Remember the S
> letter?)

Besides, there's nearly always some run-time overhead involved with
these kinds of extensions - it might be a good idea to try to keep
LADSPA *fast* and simple, while using something else for the higher
level stuff. (C vs. C++ or something.) If not, we might end up
without an API that's efficient enough for small "unit generator"
style plugins... And without an API that people wanting to hack
*simple* plugins can grasp, for that matter!

As for the GUI stuff, how about doing the DSP plugin + GUI plugin
thing I had in mind for MuCoS, but keep the DSP plugin LADSPA, when
that's sufficient...? (It *might* not be as flexible, but I think
most of the plugin<->outside world interfacing issues can be handled
by the hosts. Can't see any serious problems.)
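
(Very roughly, and with all the non-LADSPA names invented on the spot -
this is just the shape of the idea, not how MuCoS actually defines it:
the GUI side would be a separate shared object that refers to the DSP
plugin by its LADSPA UniqueID, and only talks to it through control
ports, via the host.)

#include <ladspa.h>

/* Hypothetical GUI-plugin descriptor; only the LADSPA types are real. */
typedef struct {
    unsigned long dsp_unique_id;  /* UniqueID of the LADSPA plugin it fronts */

    /* Called from the host's GUI thread. set_control() is the only way
     * the GUI may influence the DSP side - the host routes the value to
     * the right control port buffer, so the DSP plugin stays plain
     * LADSPA and never knows the GUI exists. */
    void (*open)(void *host_ctx,
                 void (*set_control)(void *host_ctx, unsigned long port,
                                     LADSPA_Data value));
    void (*close)(void *host_ctx);
} gui_descriptor;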

> Plus the system needs to provide more than one datatype, since float is
> not going to solve all our problems.

Right; I have a few models for this by now, ranging from very, very
basic designs, such as enumerating a fixed set of API built-in
datatypes, nearly all the way to IDL.

I think I'll stop slightly above the fixed datatype set variant;
there will be a bunch of built-in datatypes as of the 1.0 API, but
hosts should be able to use translator plugins or libraries to deal
with newer or custom types. (Note: No need to understand types as
long as you're connecting ports with the same type ID - just move
the raw data.)
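
(To illustrate - a minimal sketch with names I just made up; vs_port
and vs_connect() don't exist in any current API:)

#include <stddef.h>

typedef unsigned long vs_type_id;  /* built-in or registered custom type */

typedef struct {
    vs_type_id  type;    /* e.g. 1 = float audio, 2 = MIDI events, ... */
    void       *buffer;  /* opaque to the host                         */
    size_t      size;    /* bytes per block for this type              */
} vs_port;

/* Host side: if the type IDs match, the host never has to understand
 * the format - it just makes both ends use the same buffer. Anything
 * else would go through a translator plugin/library instead.          */
int vs_connect(vs_port *src, vs_port *dst)
{
    if (src->type != dst->type)
        return -1;       /* no match - a translator would be needed here */
    dst->buffer = src->buffer;
    dst->size   = src->size;
    return 0;
}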

I'm *not* going to include low level binary format info in the data
type descriptors, as it's simply not worth the effort. I can't see
plugins adding so many new data types that hosts will ever need to
worry about translating them, so it's not worth the incredible added
complexity.

> Yes we are still in the vaporware-status with this kind of API, but
> I'd prefer to do some research in order to obtain a flexible and well thought
> out system, rather than implementing a quick and dirty solution.

This ain't easy, I tell ya'... :-) Anyway, I'm open to further
discussion. I have a design that I think is quite sensible, more or
less finished on paper and 50% coded, but I'm open to ideas. (Of
course, the point with the first release is to take out the chainsaws
and axes and bash away - this is far too complex to be solved at the
first attempt at an implementation.)

(Then again, I might have thought of everything already! ;-)))

> Plus there is an additional important aspect:
> SMP: do not underestimate multiprocessing.
> Soon these boxes will be dirt cheap (dual Celerons already are),
> and all-in-software DAWs need all your available horsepower, so
> exploiting the performance of SMP boxes is a MUST for the kind of API
> I'm proposing.

Shouldn't get worse than an extra buffer of latency per CPU or
something, but then I haven't really tried to load a Linux/lowlatency
SMP system in this way.

Note: I'm not talking about splitting the net in partial chains that
        are connected end-to-end, but running the same net on all CPUs,
        interleaved. This keeps the connection buffers hot, but forces plugin
        state data to migrate from CPU to CPU... (No big deal on
        systems with a separate full speed inter-CPU bus, but these
        are still rare and very expensive.)

        Very good for plugins with little state info (such as most
        IIR filters), but painful for plugins that access lots of
        private buffers to generate every audio buffer (reverbs...)

        The problem with doing it the other way around is that the
        CPUs will depend on each other all the time, and that it's
        very hard to load-balance properly, especially with small
        nets of heavy plugins.

        The cycle/interleaved approach needs only spinlocks to make
        sure the same plugin instance isn't trying to run on more than one
        CPU at a time. The contention should be very low in the
        normal case, but high scheduling jitter peaks will "domino"
        from CPU to CPU.

        (Hey, does anyone understand this model by now? ;-)
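
(Maybe a sketch helps; invented types, no startup staggering and no
real scheduling - it only shows where the per-instance spinlock sits.
In the real thing, CPU k would of course be started one buffer period
after CPU k-1, so state is still processed in order:)

#include <pthread.h>

typedef struct plugin {
    pthread_spinlock_t lock;                     /* one per instance       */
    void (*run)(struct plugin *self, int frame); /* process one buffer     */
    struct plugin *next;                         /* net in execution order */
} plugin;

/* One of these loops per CPU; CPU 'cpu' handles buffers cpu, cpu+ncpus, ... */
static void run_interleaved(plugin *net, int cpu, int ncpus)
{
    for (int frame = cpu; ; frame += ncpus) {
        for (plugin *p = net; p; p = p->next) {
            pthread_spin_lock(&p->lock);  /* contention should be rare...  */
            p->run(p, frame);             /* ...but jitter "dominoes" here */
            pthread_spin_unlock(&p->lock);
        }
    }
}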

> (And I plan to take this a step further: once you've implemented the concept
> of multiprocessing, nothing prohibits you from extending this to a cluster of
> networked machines. (Although with higher latencies, but I think
> dedicated 100Mbit LANs can provide reliable <10msec latencies.))

This is where the above scheduling model *really* gets interesting.
It basically transforms direct CPU->CPU dependencies ("Hey, I NEED
THAT BUFFER *NOW*!!!") into dependencies on the bandwidth being
sufficient to bring data where it's needed in time. If the number of
plugins is significantly higher than the number of CPUs, there are
quite some margins to play with; i.e. it will take some time from a
plugin returning to the host until it has to be ready to start on
the next CPU, processing the next buffer. (Remember that the state
data has to migrate. In a cluster, that will be what you send over
the SAN, meaning that all plugins are effectively running on all
nodes, with non-constant data flying around! Also note that some
plugins will have huge problems running in this kind of
environment...)

> Most things are solved in my head and the many private discussions with David
> Olofson indicate that this model is the way to go.
> (basically David's experiences with Mucos will come handy to design our
> "virtual studio" API)

Well, we are not talking sub-5 ms for a cluster for a while yet, but
it should beat the current MacOS and Windows software solutions. (And
eat them alive when it comes to processing power, of course, but
that's of little value if the latency is unworkable.)

> And converting monolithic apps to this modular model is easier than you
> may think, as long as you separate the audio handling thread from the rest
> and use lock-free communication to exchange data with other parts of your app
> (GUI, disk etc.).
> Plus consider the fact that apps that run as "plugins" only, without being
> able to run standalone, are much easier to implement, since you can leave
> out all the boring audio I/O part, plus all the tricks to get low latencies.
> It's all handled by the "virtual studio" API.

That's an interesting side effect...
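
(For reference, the lock-free communication Benno is talking about is
basically a single-reader/single-writer ring buffer between the audio
thread and the GUI/disk threads. A minimal sketch - my own names, and
on SMP you'd also want proper memory barriers:)

#define RB_SIZE 1024  /* must be a power of two */

typedef struct {
    float data[RB_SIZE];
    volatile unsigned write_pos;  /* only written by the audio thread */
    volatile unsigned read_pos;   /* only written by the GUI thread   */
} ringbuf;

/* Audio thread side: never blocks; drops the value if the buffer is full. */
static int rb_write(ringbuf *rb, float value)
{
    unsigned next = (rb->write_pos + 1) & (RB_SIZE - 1);
    if (next == rb->read_pos)
        return 0;                            /* full */
    rb->data[rb->write_pos] = value;
    rb->write_pos = next;
    return 1;
}

/* GUI/disk thread side: polls whenever it feels like it. */
static int rb_read(ringbuf *rb, float *value)
{
    if (rb->read_pos == rb->write_pos)
        return 0;                            /* empty */
    *value = rb->data[rb->read_pos];
    rb->read_pos = (rb->read_pos + 1) & (RB_SIZE - 1);
    return 1;
}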

> Some time ago I asked about the internals of ardour, and Paul confirmed that
> the app is fully ready for the model, which is a good thing.
>
> Meanwhile, all developers of Linux audio apps which in future want their apps
> to be "virtual studio"-capable should obey the rules I described above
> (non-blocking audio thread which uses lock-free communication with the
> outside world).
> If you observe these rules, it will be very easy to support the API,
> and you will get low latencies instantly, perfectly in sync with the
> other N apps running concurrently. (CPU permitting)
>
> With such an API there will be almost no disadvantages (from a control POV) to
> running monolithic apps, but you have the great advantage of being able
> to patch all virtual audio/MIDI I/Os in arbitrarily
> (most of them - or perhaps all, this has to be determined - in realtime).

There are issues with plugins that have to reinitialize when ports
are connected/disconnected, but the way most plugins work (fixed
number of ports) suggests that this is not an issue - it's just a
matter of the host changing the buffer pointers before the plugin is
called the next time. (Some APIs require a call to the plugin to
change a port - this may be a problem, depending on what that call is
allowed to do.)
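
(LADSPA happens to be one of those APIs - the call is connect_port() -
but as far as I can see the host can just do something like this
between run() cycles; descriptor lookup, instantiation and the buffers
themselves are assumed to be set up already:)

#include <ladspa.h>

void rewire_and_run(const LADSPA_Descriptor *desc, LADSPA_Handle handle,
                    unsigned long out_port, LADSPA_Data *new_buffer,
                    unsigned long frames)
{
    /* Point the plugin's output port at another buffer - e.g. one that
     * now also feeds another application's input.                      */
    desc->connect_port(handle, out_port, new_buffer);

    /* The next cycle simply runs into the new destination; no pops, as
     * long as connect_port() doesn't do anything expensive or blocking. */
    desc->run(handle, frames);
}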

> (eg. Ardour is just sending 8 channels to ADAT port 1, and then you hook up
> EVO's stereo output to channels 1-2 of that ADAT port without any pop or
> crackle)

Should work as long as the engine is already running all hardware
involved, and the plugins don't do silly things when ports are
connected. (I can't see why they should, really, but I might be
missing something...)

> I think the best way to go is that the "virtual studio" API only has access
> to the data exported by the app.
> That way we reach a two level abstraction model:
>
> -the soundserver which schedules the execution of audio-apps (shared libs)
> and the flow of data between them. (audio/midi etc).
> -the audio app itself which manages its internal stuff, like running a
> chain of LADSPA plugins and doing other fancy stuff.
>
> There will be a sort of audio-manager, which will let you specify the desired
> routing. (GUI based)

I'm more into this "a plugin can be a net" thing... How about the
soundserver being the top-level host, while *all* applications
actually are plugins? Any points in their nets that the application
would like to connect to other applications would be published in
the form of ports on that application's plugin interface. The
soundserver then connects ports just like any other plugin host.
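
(Rough picture, with made-up types - nothing here is LADSPA or
existing MuCoS code:)

typedef struct vs_node vs_node;

struct vs_node {
    const char  *name;          /* "ardour", "evo", "freeverb", ...      */
    const char **port_names;    /* published connection points           */
    float      **port_buffers;  /* filled in by whoever hosts this node  */
    int          nports;
    vs_node     *children;      /* internal net; NULL for a leaf plugin  */
    int          nchildren;
    void       (*run)(vs_node *self, unsigned long frames);
};

/* The soundserver only ever sees top-level nodes and their published
 * ports; connecting "Ardour ADAT 1-2" to "EVO out L/R" is exactly the
 * same operation as connecting two leaf plugins inside a net.          */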

> That means a typical audio session would consist of firing up the audio
> manager, loading the desired "apps" like HD recorders, sequencers,
> soft-samplers etc., wiring them together as needed, saving the routing
> setup (in a project file) for successive sessions, and finally starting
> to make music.

Of course, but why two levels of APIs and connections? :-)

> Just like in the analog world, but without electrical noise, added latency
> or a jungle of cable and/or limitations in the number of inputs and outputs.

That's the really interesting part! :-)

//David

.- M u C o S -------------------------. .- David Olofson --------.
| A Free/Open Source | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
| for | | Open Source Advocate |
| Professional and Consumer | | Singer |
| Multimedia | | Songwriter |
`-----> http://www.linuxdj.com/mucos -' `---> david_AT_linuxdj.com -'


