Re: [linux-audio-dev] A few Q's


Subject: Re: [linux-audio-dev] A few Q's
From: David Olofson (david_AT_gardena.net)
Date: Sat Mar 11 2000 - 20:04:27 EST


On Fri, 10 Mar 2000, Iain Sandoe wrote:
> 1/
>
> I am having difficulty seeing the overall architecture of what is being
> attempted by the various solutions LADSPA/Alsa/MuCoS etc.

LADSPA - simple, low-level, VST 1.0-like plugin API.
         There are input and output ports for audio
         (a buffer per run() call) and control data
         (a value per run() call).
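In code, a LADSPA-style plugin with one audio input, one audio output, and one control port might look roughly like this. (The struct layout and names are my own illustration of the idea, not the actual LADSPA header; real LADSPA uses a descriptor with connect_port() and run() callbacks.)

```c
#include <stddef.h>

/* Simplified sketch of a LADSPA-style gain plugin: one audio input
 * port, one audio output port, and one control port whose single
 * value is read once per run() call. */
typedef struct {
    const float *audio_in;   /* buffer of sample_count samples        */
    float       *audio_out;  /* buffer of sample_count samples        */
    const float *gain;       /* control port: one value per run()     */
} GainPlugin;

static void run(GainPlugin *p, unsigned long sample_count)
{
    float g = *p->gain;      /* control data: sampled once per block  */
    for (unsigned long i = 0; i < sample_count; i++)
        p->audio_out[i] = p->audio_in[i] * g;
}
```

The host connects buffers to the ports, then calls run() once per block; control values only change between blocks.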

MuCoS - somewhat like the event extensions of VST 2.0,
        but far more generic. Actually, it would be possible
        to use only the event system for everything that
        LADSPA does (including audio ports), and a lot more.
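To make the "events for everything" idea concrete, here is a hypothetical event record in the spirit of MuCoS: a single timestamped type generic enough to carry a control change or even an audio buffer hand-off. All names here are assumptions for illustration only; no final MuCoS API existed.

```c
#include <stdint.h>

/* One generic, timestamped event type can subsume both control
 * ports (EV_CONTROL) and audio ports (EV_AUDIO_BUFFER). */
typedef enum { EV_CONTROL, EV_AUDIO_BUFFER } EventType;

typedef struct {
    uint32_t  frame;      /* timestamp, in sample frames           */
    EventType type;
    uint32_t  port;       /* target port on the receiving plugin   */
    union {
        float  value;     /* EV_CONTROL: new control value         */
        float *buffer;    /* EV_AUDIO_BUFFER: handed-off audio     */
    } data;
} Event;
```

Because events carry sample-accurate timestamps, control changes need not be quantized to block boundaries the way LADSPA's one-value-per-run() ports are.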

ALSA - The audio hardware driver architecture that will
       probably be the future standard on Linux. Supports
       both the OSS/Free API and a much more powerful API
       of its own. The ALSA API can handle multichannel
       pro audio hardware nicely.

LADSPA and MuCoS are probably going to end up as a single API, where
the MuCoS (event) part is optional, or with LADSPA as a low level
API for "operators" that can be run by minihosts in the form of MuCoS
plugins or clients.

(Note: My definition of "plugin" is a dynamically loadable module
with callback functions that run in the host's thread, while a
"client" is a separate thread or application that connects to other
clients through a special IPC style API.)

> Is there a hint/URL where the overall plan is discussed?

No official overall plan, but the general idea is to create a plugin
and client/server API that most of us are happy with using for our
projects. This will allow applications to interact in a useful way,
and to use the same plugins.

The MuCoS idea would allow drivers to be encapsulated as plugins or
as clients, kernel clients would be possible, and many plugins could
run in kernel space if desired, but I don't see MuCoS (or what it'll
eventually be called) replacing ALSA or OSS as a driver API. I don't
think any API could be *that* flexible, efficient and still possible
to understand w/o a degree in computer science...

> I.E. - How do I give the non-kernel-hacker musician/studio engineer
> something that he/she can relate to in terms of everyday studio functions?

Linux is more or less still lacking a solution for this. That's what
we're in the middle of trying to fix.

> 2/
>
> Latency: is a performance issue (only, I think) - there are solutions for
> record/playback systems. Note:
>
> - It takes 2ms for sound to get from a typical floor monitor to your ear.
> - It takes sound 23ms to get from one end of my studio to the other (this is
> a little disconcerting when playing the piano at one end).
> - not many people can afford in-ear monitoring.

What about monitoring through headphones in the studio? I don't like
the "hack" solution of bypassing the digital system just for this.
That doesn't work if you want to run some plugin on an instrument
when recording, in order to hear what you're doing.

> - a lot of plugs need some history of samples to be effective (OK, so there
> are usually IIR-style variants) - but (somehow) the plug must be able to
> tell the framework what "its" latency is.

I think this is an important feature that is needed anyway, and it's
not only related to low latency processing. You get the same problem
when mixing the output of plugins running in parallel, and when using
feedback loops.
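The parallel-mix case above can be sketched in a few lines: when chains with different reported latencies are mixed, the host delays each chain by the difference between the longest latency and its own, so the signals line up at the mix point. The function name is illustrative, not from any real host API.

```c
#include <stddef.h>

/* Given the reported latencies (in frames) of n parallel chains,
 * return the extra delay the host must insert into chain `chain`
 * so that all chains align with the slowest one. */
static unsigned long compensation_delay(const unsigned long *latencies,
                                        size_t n, size_t chain)
{
    unsigned long max = 0;
    for (size_t i = 0; i < n; i++)
        if (latencies[i] > max)
            max = latencies[i];
    return max - latencies[chain];
}
```

This only works if every plugin can report its latency; a single plugin that can't leaves the whole mix misaligned.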

I don't see why the API should break low latency real time processing
just because lots of users don't use it today. (How can they when
it's not possible!?) Personally, I've just had it with native
processing systems feeling like a budget alternative to "real"
equipment. There is no real technical excuse for this.

> 3/
> LinuxPPC:
>
> Is there a LinuxPPC developer on/involved in the list?
> (Lies, damned lies & benchmarks - in the real world - I get 1.3 times the
> performance from a 300MHz G3 than I do from a 500MHz PIII - which matters for
> multi-plug solutions).

I'm not surprised... RISCs have been beating CISC CPUs on this kind
of stuff all along, and they'll probably keep doing it for a
good while. I think the reason for that is that no workstation CPUs
are really suited for DSP, while RISCs usually adapt better to things
they're not designed for. That's because they don't waste chip space
and decoding overhead on all those CISC style "high level"
instructions that are pointless in DSP code.

> 4/
> "They won't ever want to do that"
>
> Not true - most of the "Advances" in music happen from just that (whether
> you consider them musical or not).

Exactly. For the same reason, I think that all plugins that can
support low latency real time processing should. There's a
*huge* difference between realtime "fiddling" and off-line editing.

//David

.- M u C o S --------------------------------. .- David Olofson ------.
| A Free/Open Multimedia | | Audio Hacker |
| Plugin and Integration Standard | | Linux Advocate |
`------------> http://www.linuxdj.com/mucos -' | Open Source Advocate |
.- A u d i a l i t y ------------------------. | Singer |
| Rock Solid Low Latency Signal Processing | | Songwriter |
`---> http://www.angelfire.com/or/audiality -' `-> david_AT_linuxdj.com -'



This archive was generated by hypermail 2b28 : Sun Mar 12 2000 - 09:14:06 EST