[linux-audio-dev] sound API libraries, servers, etc.


Subject: [linux-audio-dev] sound API libraries, servers, etc.
From: Paul Davis (pbd_AT_Op.Net)
Date: Fri Apr 20 2001 - 02:58:33 EEST


in the last couple of days, i've seen a couple of announcements of new
libraries designed to "abstract" various audio APIs.

i want to rant for a moment. apologies in advance if i step too
heavily on anyone's toes.

i just don't get the idea of these libraries. IMHO, there is only one
sensible kind of audio API library that should be developed beyond the
so-called "native" ones, and that's a *SERVER* API to allow multiple
apps to collaborate, cooperate and possibly, though not necessarily,
communicate, all in a device and device-API independent fashion.

at the moment, i see
   
   * alsa-lib (libasound)

         - provides for this functionality
         - private to ALSA
         - no actual "server" process involved,
               so collaboration is limited to device
               sharing.

   * aRts
     
         - comprehensive server
         - full collaboration and sharing
         - incorrect support for high-end devices
         - out-of-process clients

   * AES

         - geared primarily to low latency, high performance
         - collaboration limited to shared use of
                audio interfaces, and inter-client
                audio transfer via internal busses
         - in-process clients

the libraries that i've seen announced attempt to wrap existing
libraries that either:

    * already provide (or will shortly provide) abstractions of
        "all" possible audio operations (e.g. alsa-lib)
    * cannot provide "all" possible audio operations (e.g.
        OSS API)

so what is the point of the wrappers?

well, there is a very obvious point, and that is allowing people to
write applications that can be compiled in the presence of different
audio device APIs.

but...i really strongly and strenuously do not believe that
new libraries are the way to do this. the way forward is for
applications to use existing server architectures that take care of
all that low-level device access crap, and leave them free to do their
thing without paying any attention to those details. One way or
another, we're fundamentally talking "plugins" here, though at a
different level, mostly, than LADSPA offers.

aRts, having been adopted by at least KDE, is a good target for
applications not requiring low latency/high performance. it's robust,
extremely flexible, and provides many services that may prove useful
as applications expand in scope.

AES, being totally new, is not well-tested, but it does run two
audio applications very well, and provides an extremely efficient
environment for "pro audio" applications and hardware.

if aRts could adopt the model that AES uses for h/w access and offer
in-process clients, i would be happy to recommend aRts for
everything. in the meantime, either of these is a better long-term
choice for people wanting to write device-independent and
device-API-independent applications.

and if neither of them satisfies, then i strongly urge all members of
this list (*LINUX* audio development) to use ALSA natively, not
re-wrapped in another library.

--p



This archive was generated by hypermail 2b28 : Fri Apr 20 2001 - 03:20:46 EEST