Re: [linux-audio-dev] CSL-0.1.2 Release


Subject: Re: [linux-audio-dev] CSL-0.1.2 Release
From: Paul Davis (pbd_AT_Op.Net)
Date: Fri Jun 08 2001 - 14:29:50 EEST


>> >set_pcm_playback_mode(handle, ALSA_NON_INTERLEAVED |
>> >ALSA_SAMPLETYPE_FLOAT, channelcnt)
>>
>> what does this have to do with the work that most applications want to
>> get done?
>
>I just wanted to point out that ALSA has too many API functions for my
>taste and is not really suited to be used by 90%+ of all applications.
>I don't have the feeling that the API has stabilized yet, because for
>every new feature new function calls and types are introduced. Or maybe
>I'm missing something here? I have to admit that I didn't get the plugin
>concept yet: plug:, hw:, ... What functionality is hidden behind it? What
>sorts of plugs are there anyway?

You're missing something. I've written about this on alsa-devel many
times.

1) ALSA follows a simple philosophy: the low level device drivers
   present in the kernel export the true functionality of the audio
   hardware to user space. That is, if the card cannot do anything
   other than 16 bit interleaved stereo, then the device driver
   doesn't do anything to pretend otherwise. If it can't do anything
   other than 26 channels of 24-in-32 bit non-interleaved audio, then
   again, the device driver doesn't pretend otherwise.

2) All the work involved in allowing a device to be used in a way not
   supported by its hardware capabilities is done in user-space via
   alsa-lib. This includes interleaving issues, sample bit width
   conversion, sample rate conversion, and so on and so forth.

3) If you use the "hw:dev,subdev" interface, you are bypassing the
   capabilities of alsa-lib to provide arbitrary capabilities, and are
   using a device that can do only what the underlying hardware can
   do.

4) If you use the "plug:dev,subdev" interface, you are using the
   full capabilities of alsa-lib to mask over any particular aspects
   of the hardware; alsa-lib will allow you to request any parameter
   configuration that can be achieved by data transformation and
   other clever tricks.

So, if you want to be able to open your Hammerfall and play a 16 bit
stereo interleaved file through it without any work on your part, you
obviously cannot use the "hw:dev,subdev" layer, because the Hammerfall
hardware cannot possibly support this. However, if you use the
"plug:dev,subdev" layer, it works just fine.

Note, however, that the application doesn't need to change its code to
deal with either the hw or plug layers. All that changes is the PCM
name given to snd_pcm_open(). Clearly, in most instances, users should
be using the "plug" layer if they want to avoid problems, but
applications don't need to be recompiled if the user wants to use the
"hw" or "plug" layers on different occasions.

>ALSA. ALSA is not abstracted. The flexibility of ALSA is given through

alsa-lib is already highly abstracted, but poorly understood because
it's poorly documented. It's not as highly abstracted as I want LAAGA
to be, however.

>zillions of function calls that access the hardware more or less directly.

No they do not. Please make sure that you fully understand what I've
written above about ALSA.

>> what if something else is using the h/w in a different way?
>
>That's something the layer on top of the h/w should take care of.

I consider this incorrect, in a subtle way. I don't believe that
applications should be concerned about device configuration at all.

>> what if the goal is to move data between two applications, not to a
>> h/w device?
>
>Unix has pipes for this purpose; it's just a matter of defining a
>protocol. 90%+ of applications probably don't need the level of realtime
>that ardour needs. Pushing the data through a pipe would suffice.

Unix pipes are about 5kB in size. That's enough to hold 50msec worth of
(mono) 44.1kHz 16 bit data, or 26msec worth of 48kHz 32 bit data, or
13msec worth of 96kHz 32 bit data. If you make the data interleaved,
even at 44.1kHz, you can only fit 26msec of stereo data into a
pipe. Given that the default scheduling timeslice is 100msec, this
means that if another process uses its full timeslice, a pipe is not
large enough to hold the audio data for a single 16 bit stereo 44.1kHz
application across a pair of context switches. The application could
use RT POSIX scheduling to help, but even that will fail if the setup
is not a low-latency one and there are too many things going on.
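
As a rough cross-check of those figures (the 5000 byte pipe size below
is just the "about 5kB" estimate, not an exact kernel constant):

    #include <stdio.h>

    /* Prints roughly how many msec of audio fit in one pipe for a few
       sample formats; numbers land near the ballpark figures above. */
    int main(void)
    {
        const double pipe_bytes = 5000.0;
        const struct { const char *desc; double rate, width, chans; } f[] = {
            { "44.1kHz 16 bit mono  ", 44100, 2, 1 },
            { "44.1kHz 16 bit stereo", 44100, 2, 2 },
            { "48kHz 32 bit mono    ", 48000, 4, 1 },
            { "96kHz 32 bit mono    ", 96000, 4, 1 },
        };
        int i;

        for (i = 0; i < 4; i++)
            printf("%s: %5.1f msec\n", f[i].desc,
                   1000.0 * pipe_bytes /
                   (f[i].rate * f[i].width * f[i].chans));
        return 0;
    }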

Pipes are not the correct model when you want to move 20MB/sec or more
between applications. They weren't designed with this in mind.

As for the realtime requirements, here is a partial list of
applications that are supposed to be "realtime like Ardour":

             ardour
             jMax
             pd
             csound
             quasimodo
             terminatorX
             soundtracker
             taapir
             ultramaster synths
             rythmnlab
             greenbox
             muse
             jazz++

The list goes on. It just so happens that these are the kinds of
applications that we want to be sharing data in a low-latency,
real-time environment.

>I don't see yet, how two different process can call each others callbacks.

I will be posting the full implementation in a couple of days. It's
basically an IPC mechanism using shared memory, kill(2) and sockets.
I'm a little slower than usual because my daughter has finished school
for the year :)

>Or are we going to implement all our applications as plugins for one
>host that starts the application as thread?

No, not as plugins, but more generally as clients. LAAGA will make it
possible for clients to be in-process ("dlopen") or out-of-process
("ipc").

--p


