my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage


Subject: my take on the "virtual studio" (monolith vs plugins) ... was Re: [linux-audio-dev] ardour, LADSPA, a marriage
From: Benno Senoner (sbenno_AT_gardena.net)
Date: Thu Nov 16 2000 - 13:32:28 EET


Hi,

My take on the Linux modular virtual studio argument
is basically in line with Paul BD, David O. and Tommi I.:
to achieve a truly modular virtual studio, we do need
the "shared object" concept (better not to call it a plugin, as someone
pointed out).

I disagree that LADSPA is the way to go:
LADSPA is an audio plugin API, while what we need is an inter-application
routing / dispatching API.

Basically we need to route both audio data and events, where an event can be
MIDI data or some other control data such as GUI events (set fader X of
application Y to level Z, query the status of fader A in application B, and
so on).
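
To make that concrete, here is a rough sketch of what one routed event could
look like.  All the names and fields are hypothetical, just to illustrate
that a single message type can carry raw MIDI bytes as well as "set fader X
of app Y" style control data:

/* Hypothetical event layout for the routing API (illustration only). */

#include <stdint.h>

typedef enum {
    VS_EVENT_MIDI,          /* raw MIDI bytes                          */
    VS_EVENT_CONTROL_SET,   /* "set fader X of app Y to level Z"       */
    VS_EVENT_CONTROL_QUERY  /* "what is the status of fader A in B?"   */
} vs_event_type;

typedef struct {
    vs_event_type type;
    uint32_t      frame;       /* sample-accurate time within the block */
    uint32_t      target_app;  /* which app the event is addressed to   */
    uint32_t      target_port; /* which fader/control of that app       */
    union {
        uint8_t midi[4];       /* a short MIDI message                  */
        float   value;         /* new value for a control               */
    } data;
} vs_event;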

We need an API which is able to accommodate complex apps like Ardour
or EVO, so LADSPA is entirely out of the question (remember what the S
stands for?).

Plus the system needs to provide more than one datatype, since float is
not going to solve all our problems.
For example, assume we want to route 24-bit audio between two apps
which work in the integer domain: with float we have no guarantee of
maintaining bit accuracy, plus all this back-and-forth float<->int conversion
is not what Intel CPUs really like.
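
As a sketch of what I mean (hypothetical names again), each port could carry
a datatype tag, so that two integer-domain apps can agree on a raw 24-bit
format and skip the int<->float round trip entirely:

/* Hypothetical per-port format tag (illustration only). */

#include <stdint.h>

typedef enum {
    VS_FORMAT_FLOAT32,   /* 32-bit float, like LADSPA uses           */
    VS_FORMAT_INT24,     /* 24-bit samples carried in an int32_t     */
    VS_FORMAT_INT16,     /* plain 16-bit PCM                         */
    VS_FORMAT_EVENTS     /* MIDI / control events instead of audio   */
} vs_port_format;

typedef struct {
    vs_port_format format;      /* what the buffer actually contains  */
    uint32_t       channels;
    uint32_t       sample_rate;
} vs_port_info;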

Yes, this kind of API is still at the vaporware stage, but
I'd prefer to do some research in order to obtain a flexible and well thought
out system, rather than implementing a quick and dirty solution.

Plus there is an additional important aspect,
SMP: do not underestimate multiprocessing.
Soon these boxes will be dirt cheap (dual Celerons already are),
and an all-in-software DAW needs all the horsepower it can get, so
exploiting the performance of SMP boxes is a MUST for the kind of API
I'm proposing.

(And I plan to take this a step further: once you have implemented the
concept of multiprocessing, nothing prevents you from extending it to a
cluster of networked machines, although with higher latencies; I think
dedicated 100Mbit LANs can provide reliable <10msec latencies.)

Most things are solved in my head, and the many private discussions with
David Olofson indicate that this model is the way to go.
(Basically, David's experience with Mucos will come in handy when designing
our "virtual studio" API.)

And converting monolithic apps to this modular model is easier than you
may think, as long as you separate the audio handling thread from the rest
and use lock-free communication (e.g. a ring buffer, sketched below) to
exchange data with the other parts of your app (GUI, disk, etc.).
Plus consider the fact that apps that run as "plugins" only, without being
able to run standalone, are much easier to implement, since you can leave
out all the boring audio I/O code, plus all the tricks needed to get low
latencies: it's all handled by the "virtual studio" API.
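
For the lock-free communication part, the classic tool is a single-reader /
single-writer ring buffer.  Here is a minimal sketch (no memory barriers, so
it is illustration only; a real SMP-safe version needs them):

/* Minimal single-reader / single-writer ring buffer, the kind of
 * lock-free channel I mean between the audio thread and the GUI or
 * disk threads. */

#include <stddef.h>

#define RB_SIZE 4096                /* must be a power of two */

typedef struct {
    char            buf[RB_SIZE];
    volatile size_t write_pos;      /* advanced only by the writer */
    volatile size_t read_pos;       /* advanced only by the reader */
} ringbuffer;

/* Called from the non-audio side; returns how many bytes fit. */
static size_t rb_write(ringbuffer *rb, const char *src, size_t len)
{
    size_t w = rb->write_pos, r = rb->read_pos, i;
    size_t space = RB_SIZE - 1 - ((w - r) & (RB_SIZE - 1));

    if (len > space)
        len = space;                /* never block, just write less */
    for (i = 0; i < len; i++)
        rb->buf[(w + i) & (RB_SIZE - 1)] = src[i];
    rb->write_pos = (w + len) & (RB_SIZE - 1);
    return len;
}

/* Called from the audio thread; returns how many bytes were there. */
static size_t rb_read(ringbuffer *rb, char *dst, size_t len)
{
    size_t w = rb->write_pos, r = rb->read_pos, i;
    size_t avail = (w - r) & (RB_SIZE - 1);

    if (len > avail)
        len = avail;                /* never block, just read less */
    for (i = 0; i < len; i++)
        dst[i] = rb->buf[(r + i) & (RB_SIZE - 1)];
    rb->read_pos = (r + len) & (RB_SIZE - 1);
    return len;
}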

Some time ago I asked about the internals of ardour, and Paul confirmed that
the app is fully ready for the model, which is a good thing.

Meanwhile, all developers of Linux audio apps who want their apps to be
"virtual studio"-capable in the future should follow the rules I described
above (a non-blocking audio thread which uses lock-free communication with
the outside world).
If you observe these rules, it will be very easy to support the API,
and you will instantly get low latencies, perfectly in sync with the other
N apps running concurrently (CPU permitting).

With such an API there will be almost no disadvantage (from a control POV)
compared to running monolithic apps, but you get the great advantage of being
able to patch all the virtual audio/MIDI I/Os arbitrarily, most of them (or
perhaps all of them, this has to be determined) in realtime.
(E.g. Ardour is already sending 8 channels to ADAT port 1, and then you hook
up EVO's stereo output to channels 1-2 of that ADAT port without any pop or
crackle.)
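
To show what such run-time patching could feel like from the client's side,
here is a sketch.  None of these functions exist anywhere yet, the names are
pure invention:

/* Illustration only: nothing here exists yet, the function names are
 * invented just to show what run-time patching could feel like. */

typedef struct vs_studio vs_studio;   /* opaque handle to the soundserver */
typedef struct vs_port   vs_port;     /* one audio or MIDI endpoint       */

vs_studio *vs_studio_open(const char *session_name);
vs_port   *vs_find_port(vs_studio *s, const char *app, const char *port);
int        vs_connect(vs_studio *s, vs_port *from, vs_port *to);

int main(void)
{
    vs_studio *studio = vs_studio_open("my-session");

    /* Ardour keeps sending its 8 channels to ADAT port 1; we add EVO's
     * stereo out on top of channels 1-2 while everything keeps running. */
    vs_connect(studio,
               vs_find_port(studio, "EVO", "out_left"),
               vs_find_port(studio, "alsa_adat1", "playback_1"));
    vs_connect(studio,
               vs_find_port(studio, "EVO", "out_right"),
               vs_find_port(studio, "alsa_adat1", "playback_2"));
    return 0;
}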

I think the best way to go is for the "virtual studio" API to have access
only to the data exported by the app.
That way we reach a two-level abstraction model:

- the soundserver, which schedules the execution of the audio apps (shared
  libs) and the flow of data between them (audio/MIDI etc.);
- the audio app itself, which manages its internal stuff, like running chains
  of LADSPA plugins and doing other fancy stuff (a rough sketch of such a
  shared object follows below).
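
A rough sketch of what such a shared object could export to the soundserver
(again, all names are invented; the point is only the division of
responsibilities):

/* The soundserver owns scheduling and wiring, the app owns its internals
 * (its own LADSPA chains and so on). */

#include <stdint.h>

typedef struct vs_app vs_app;                 /* opaque per-instance state */

typedef struct {
    const char *name;                         /* e.g. "ardour", "evo"      */
    uint32_t    num_audio_in, num_audio_out;  /* audio ports it exposes    */
    uint32_t    num_event_in, num_event_out;  /* MIDI/control ports        */

    /* called once when the soundserver loads the shared object */
    vs_app *(*instantiate)(uint32_t sample_rate, uint32_t max_block);

    /* called by the server for every block, from the realtime thread:
     * must not block, allocate memory or take locks (simplified here
     * to float buffers; a format tag would pick the real sample type) */
    void    (*process)(vs_app *self,
                       float **audio_in, float **audio_out,
                       uint32_t nframes);

    void    (*cleanup)(vs_app *self);
} vs_app_descriptor;

/* The one well-known symbol the soundserver looks up with dlsym(): */
const vs_app_descriptor *vs_app_descriptor_get(void);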

There will be a sort of audio manager (GUI-based) which will let you specify
the desired routing.

That means a typical audio session would consist of firing up the audio
manager, loading the desired "apps" (HD recorders, sequencers, soft-samplers
etc.), wiring them together as needed, saving the routing setup (in a project
file) for later sessions, and finally beginning to make music.

Just like in the analog world, but without the electrical noise, the added
latency, the jungle of cables, or the limitations on the number of inputs
and outputs.

Benno.


