Subject: Re: [linux-audio-dev] Re: [alsa-devel] Toward a modularization of audio component
From: Paul Davis (pbd_AT_Op.Net)
Date: Fri May 04 2001 - 14:39:50 EEST
>> It's a cool (though not new) model.
>
>I've never claimed it's a new idea, and I'm not sure I understand why
>you emphasize this.
I wasn't trying to be insulting. Sorry.
>> However, I don't think that its at all suitable as a model for a
>> pseudo-RT system, which you appear to be aiming for.
>
>Why?
Because you're wasting so much time on house-keeping and flexibility.
Once you adopt this model:
>> * an audio-interrupt driven model, in which a list of
>> functions are called that cause the processing or synthesis
>> of a specified number of frames of audio, and their delivery
>> (if appropriate) to an output location.
>
>This is exactly what happens with the model proposed.
which it sounds as if you have, then you can use a very simple API.
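As a minimal sketch of that interrupt-driven model: every name below (engine, process_fn, engine_cycle) is invented for illustration and comes from neither ALSA nor the proposal under discussion. The host calls each registered function once per audio interrupt, asking it to produce a specified number of frames, and mixes the results for delivery.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CLIENTS 16
#define MAX_FRAMES  1024

/* Each client exposes one function: produce nframes of audio into buf. */
typedef void (*process_fn)(float *buf, size_t nframes, void *arg);

struct engine {
    process_fn fns[MAX_CLIENTS];
    void      *args[MAX_CLIENTS];
    int        nclients;
};

static int engine_register(struct engine *e, process_fn fn, void *arg)
{
    if (e->nclients >= MAX_CLIENTS)
        return -1;
    e->fns[e->nclients]  = fn;
    e->args[e->nclients] = arg;
    return e->nclients++;
}

/* Called once per audio interrupt: run every registered function for
 * nframes, mixing the results into one buffer for delivery. */
static void engine_cycle(struct engine *e, float *mix, size_t nframes)
{
    float scratch[MAX_FRAMES];

    assert(nframes <= MAX_FRAMES);
    for (size_t i = 0; i < nframes; i++)
        mix[i] = 0.0f;
    for (int c = 0; c < e->nclients; c++) {
        e->fns[c](scratch, nframes, e->args[c]);
        for (size_t i = 0; i < nframes; i++)
            mix[i] += scratch[i];
    }
}

/* A toy client: emits a constant value (illustrative only). */
static void dc_client(float *buf, size_t nframes, void *arg)
{
    float v = *(float *)arg;
    for (size_t i = 0; i < nframes; i++)
        buf[i] = v;
}
```

Note what is absent: no format negotiation, no configuration state machine. The host dictates one sample format and one frame count per cycle, and the entire client-facing API is a single function pointer.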
>> the ALSA PCM API is an excellent API for interacting with audio
>> interfaces. However, I think it's a much weaker API for handling audio
>> in a way that is conceptually independent of an audio interface. Yes,
>> I know that alsa-lib actually *is* independent of an audio
>> interface. But its entire model of operation is rooted in concepts
>> that stem from hardware.
>
>I don't rule out that the ALSA PCM API will need some adjustments to
>better suit this model.
It's not that it needs adjustments so much as simplification. As I
said, it's great for interacting with "devices"; as a "plugin" API,
however, it seems far too complex to me.
>An audio producer/consumer may easily force a sample format, and
>contrary to what you seem to claim, flexibility is good and not evil
>if it comes cost-free.
It does not come cost-free. We went through this when working on
LADSPA: the *possibility* of a multiplicity of data formats introduces
complications for everyone, from users down to programmers.
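To make the cost concrete, here is a toy illustration (not code from LADSPA, though LADSPA did settle on a single 32-bit float sample type for exactly this reason): with one mandated format, a gain stage is a few lines; each additional permitted format adds a dispatch plus a new inner loop to every plugin, and matching conversion logic to every host.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* With a single mandated sample format, this is the whole plugin. */
static void gain_float(float *buf, size_t n, float g)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= g;
}

/* Allowing a second format already doubles the surface area: every
 * plugin now needs a format dispatch and a second code path. */
enum fmt { FMT_FLOAT, FMT_S16 };

static void gain_any(void *buf, size_t n, float g, enum fmt f)
{
    switch (f) {
    case FMT_FLOAT:
        gain_float(buf, n, g);
        break;
    case FMT_S16: {
        int16_t *s = buf;
        for (size_t i = 0; i < n; i++)
            s[i] = (int16_t)(s[i] * g);
        break;
    }
    }
}
```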
>Please understand that we're discussing a way to reuse audio programming
>efforts, and using the same API for everything has huge advantages.
It has advantages, but it also has costs. Please remember Abramo, as I
know you do, that I am particularly focused on applications that can
be used by professional/commercial recording studios, often for
real-time processing. I am not convinced that there is a single
approach that will support all needs, but if there is, I think it is
more likely to be shaped primarily by the needs of low-latency,
real-time applications. Other applications can mostly be
characterized as the RT ones with their constraints loosened. Those
needs translate easily into a very simple API (as has been suggested
here with some concrete examples from Kai and myself).
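A minimal sketch of the shape such a same-connector API could take (all names below are invented for illustration and are not the concrete proposals from this thread): every unit reads 32-bit float audio from an input buffer and writes to an output buffer, so "connecting" two tools is nothing more than handing one unit's output buffer to the next as its input.

```c
#include <assert.h>
#include <stddef.h>

/* "Same connectors on every tool": each unit reads float audio from
 * an input buffer and writes to an output buffer; connecting two
 * units is just sharing a buffer between them. */
typedef void (*unit_process)(const float *in, float *out,
                             size_t nframes, void *state);

struct unit {
    unit_process process;
    void        *state;
    const float *in;   /* wired up by the host */
    float       *out;
};

/* Run a chain of units in order; the host has arranged that
 * units[k].out is units[k+1].in. */
static void run_chain(struct unit *units, int n, size_t nframes)
{
    for (int k = 0; k < n; k++)
        units[k].process(units[k].in, units[k].out, nframes,
                         units[k].state);
}

/* Two toy units for illustration. */
static void unit_gain(const float *in, float *out, size_t n, void *s)
{
    float g = *(float *)s;
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * g;
}

static void unit_offset(const float *in, float *out, size_t n, void *s)
{
    float o = *(float *)s;
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] + o;
}
```

The whole "connector" is one function signature; everything a hardware-rooted API spends on states, formats, and negotiation simply disappears.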
>It's like having several tools in your sound studio that all have the
>same connectors (this is the model I had in mind with this proposal;
>as you can see, it's a very old idea ;-)
Yes; Steinberg, for example, calls it Virtual Studio Technology (VST),
and it doesn't look anything like the ALSA PCM API.
I think we can do better than VST and ReWire (or TDM or DirectX or
MAS), but we can also learn a lot from them.
The point of LADSPA was to be able to share audio programming efforts
on the DSP level. It doesn't use an ALSA-like API at all. The point of
LAAGA is to be able to share programming efforts and have them
interact at a somewhat higher level. You seem to feel that the ALSA
PCM API, or a somewhat modified version of it, is an appropriate model
with which to do that. I think that a much simpler API is possible,
and more appropriate, and I think we have some good examples of the
kind of API I mean already.
--p
This archive was generated by hypermail 2b28 : Fri May 04 2001 - 15:12:57 EEST