Re: [linux-audio-dev] audio routing (was: Re: LADSPA GUI Issues)


Subject: Re: [linux-audio-dev] audio routing (was: Re: LADSPA GUI Issues)
From: David Olofson (david_AT_gardena.net)
Date: Wed Mar 15 2000 - 23:44:59 EST


On Mon, 13 Mar 2000, Kai Vehmanen wrote:
> 3. MuCoS
>
> This seems to be a good answer, no matter what the question is. :)

Well, as they say, nothing can be as good as vapourware... ;-)

> David, what's your view on the "MuCos <=> audio app connecting" issue?

Well, there aren't many practical details yet - I still haven't
managed to publish usable code for the event system, and Benno is the
one who has actually tried some IPC solutions that might be usable.

Anyway, the idea is to make the client/server API pretty similar to
the plugin API, meaning basically that clients have event ports, just
like plugins. There is essentially just one significant difference
between plugins and clients:

* Clients run as threads, while plugins
  run as callback functions.

This means that while a plugin "sleeps" until the next host
cycle/buffer by returning to the host (the host normally dictates when
this happens by setting a terminator event, making the plugin behave
very much like a LADSPA or VST 1.0 plugin), a client just goes to
sleep whenever it feels like it, as any other thread.

Event ports on plugins act merely as an interface to a "buffer" of
events, which works much like an audio data buffer, except that it is
structured (and can contain any number of commands committing changes
to various parameters) rather than a flat array of values. Event
ports on clients work more like FIFOs or event queues, and when
dealing with them there is no real notion of the cycle/buffer time
frame that a normal plugin host has.

Timestamps for plugins are relative to the start of the current cycle
(run() call), while for clients the timestamp runs continuously - an
obvious consequence of there not being any dictated host
cycles/buffers.

Now, what this means is that applications can be connected to each
other the same way as plugins inside a host. The significant
difference from the user's POV is that they may have to deal with the
issues involved in running multiple SCHED_FIFO threads in parallel,
which raises the lowest drop-out-free latency.

The sample-accurate sync is still there, the event system is the
same (plugins inside one host can easily be "directly" connected to a
plugin inside another host by just gatewaying the events), and the
event system is flexible enough to carry just about anything. (There
will be an official, generic specification of how to send events with
references to external data buffers - "huge events".) Implementing a
kernel audio card driver as a MuCoS client would be possible, if
someone sees a point in it, including wavetable support with sample
upload etc. It's "just" a matter of defining what events and channels
to use to make it a usable standard.

So, it might actually turn out to be a good answer to almost
anything, but it has to be implemented first, and we have to agree
on some useful controller range standard and all that... *hehe*

//David

.- M u C o S --------------------------------. .- David Olofson ------.
| A Free/Open Multimedia | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
.- A u d i a l i t y ------------------------. | Singer |
| Rock Solid Low Latency Signal Processing | | Songwriter |
`---> http://www.angelfire.com/or/audiality -' `-> david_AT_linuxdj.com -'



This archive was generated by hypermail 2b28 : Thu Mar 16 2000 - 07:23:12 EST